00:00:00.001 Started by upstream project "autotest-nightly" build number 3922
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3297
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.043 The recommended git tool is: git
00:00:00.043 using credential 00000000-0000-0000-0000-000000000002
00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.070 Fetching changes from the remote Git repository
00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.108 Using shallow fetch with depth 1
00:00:00.108 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.108 > git --version # timeout=10
00:00:00.167 > git --version # 'git version 2.39.2'
00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.739 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.749 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.758 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:03.758 > git config core.sparsecheckout # timeout=10
00:00:03.768 > git read-tree -mu HEAD # timeout=10
00:00:03.782 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:03.799 Commit message: "packer: Add bios builder"
00:00:03.799 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:03.894 [Pipeline] Start of Pipeline
00:00:03.907 [Pipeline] library
00:00:03.909 Loading library shm_lib@master
00:00:03.909 Library shm_lib@master is cached. Copying from home.
00:00:03.923 [Pipeline] node
00:00:03.931 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.933 [Pipeline] {
00:00:03.941 [Pipeline] catchError
00:00:03.942 [Pipeline] {
00:00:03.953 [Pipeline] wrap
00:00:03.961 [Pipeline] {
00:00:03.967 [Pipeline] stage
00:00:03.968 [Pipeline] { (Prologue)
00:00:04.130 [Pipeline] sh
00:00:04.410 + logger -p user.info -t JENKINS-CI
00:00:04.427 [Pipeline] echo
00:00:04.429 Node: GP11
00:00:04.434 [Pipeline] sh
00:00:04.725 [Pipeline] setCustomBuildProperty
00:00:04.732 [Pipeline] echo
00:00:04.733 Cleanup processes
00:00:04.737 [Pipeline] sh
00:00:05.015 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.015 4107067 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.028 [Pipeline] sh
00:00:05.306 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.306 ++ grep -v 'sudo pgrep'
00:00:05.306 ++ awk '{print $1}'
00:00:05.306 + sudo kill -9
00:00:05.306 + true
00:00:05.318 [Pipeline] cleanWs
00:00:05.325 [WS-CLEANUP] Deleting project workspace...
00:00:05.325 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.332 [WS-CLEANUP] done
00:00:05.335 [Pipeline] setCustomBuildProperty
00:00:05.345 [Pipeline] sh
00:00:05.623 + sudo git config --global --replace-all safe.directory '*'
00:00:05.694 [Pipeline] httpRequest
00:00:05.712 [Pipeline] echo
00:00:05.713 Sorcerer 10.211.164.101 is alive
00:00:05.718 [Pipeline] httpRequest
00:00:05.722 HttpMethod: GET
00:00:05.722 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:05.723 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:05.725 Response Code: HTTP/1.1 200 OK
00:00:05.726 Success: Status code 200 is in the accepted range: 200,404
00:00:05.726 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:06.492 [Pipeline] sh
00:00:06.773 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:06.785 [Pipeline] httpRequest
00:00:06.799 [Pipeline] echo
00:00:06.800 Sorcerer 10.211.164.101 is alive
00:00:06.806 [Pipeline] httpRequest
00:00:06.810 HttpMethod: GET
00:00:06.810 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:06.811 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:06.828 Response Code: HTTP/1.1 200 OK
00:00:06.829 Success: Status code 200 is in the accepted range: 200,404
00:00:06.829 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:52.547 [Pipeline] sh
00:00:52.830 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:56.132 [Pipeline] sh
00:00:56.417 + git -C spdk log --oneline -n5
00:00:56.417 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:00:56.417 fc2398dfa raid: clear base bdev configure_cb after executing
00:00:56.417 5558f3f50 raid: complete bdev_raid_create after sb is written
00:00:56.417 d005e023b raid: fix empty slot not updated in sb after resize
00:00:56.417 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:00:56.429 [Pipeline] }
00:00:56.447 [Pipeline] // stage
00:00:56.457 [Pipeline] stage
00:00:56.459 [Pipeline] { (Prepare)
00:00:56.478 [Pipeline] writeFile
00:00:56.496 [Pipeline] sh
00:00:56.780 + logger -p user.info -t JENKINS-CI
00:00:56.793 [Pipeline] sh
00:00:57.076 + logger -p user.info -t JENKINS-CI
00:00:57.088 [Pipeline] sh
00:00:57.390 + cat autorun-spdk.conf
00:00:57.390 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.390 SPDK_TEST_NVMF=1
00:00:57.390 SPDK_TEST_NVME_CLI=1
00:00:57.390 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:57.390 SPDK_TEST_NVMF_NICS=e810
00:00:57.390 SPDK_RUN_ASAN=1
00:00:57.390 SPDK_RUN_UBSAN=1
00:00:57.390 NET_TYPE=phy
00:00:57.398 RUN_NIGHTLY=1
00:00:57.402 [Pipeline] readFile
00:00:57.427 [Pipeline] withEnv
00:00:57.430 [Pipeline] {
00:00:57.443 [Pipeline] sh
00:00:57.727 + set -ex
00:00:57.727 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:57.727 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:57.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.727 ++ SPDK_TEST_NVMF=1
00:00:57.727 ++ SPDK_TEST_NVME_CLI=1
00:00:57.727 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:57.727 ++ SPDK_TEST_NVMF_NICS=e810
00:00:57.727 ++ SPDK_RUN_ASAN=1
00:00:57.727 ++ SPDK_RUN_UBSAN=1
00:00:57.727 ++ NET_TYPE=phy
00:00:57.727 ++ RUN_NIGHTLY=1
00:00:57.727 + case $SPDK_TEST_NVMF_NICS in
00:00:57.727 + DRIVERS=ice
00:00:57.727 + [[ tcp == \r\d\m\a ]]
00:00:57.727 + [[ -n ice ]]
00:00:57.727 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:57.727 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:57.727 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:57.727 rmmod: ERROR: Module irdma is not currently loaded
00:00:57.727 rmmod: ERROR: Module i40iw is not currently loaded
00:00:57.727 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:57.727 + true
00:00:57.727 + for D in $DRIVERS
00:00:57.727 + sudo modprobe ice
00:00:57.727 + exit 0
00:00:57.736 [Pipeline] }
00:00:57.754 [Pipeline] // withEnv
00:00:57.760 [Pipeline] }
00:00:57.777 [Pipeline] // stage
00:00:57.788 [Pipeline] catchError
00:00:57.790 [Pipeline] {
00:00:57.808 [Pipeline] timeout
00:00:57.808 Timeout set to expire in 50 min
00:00:57.810 [Pipeline] {
00:00:57.824 [Pipeline] stage
00:00:57.827 [Pipeline] { (Tests)
00:00:57.842 [Pipeline] sh
00:00:58.124 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.124 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.124 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.124 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:58.124 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:58.124 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.124 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:58.124 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.124 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.124 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.124 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:58.124 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.124 + source /etc/os-release
00:00:58.124 ++ NAME='Fedora Linux'
00:00:58.124 ++ VERSION='38 (Cloud Edition)'
00:00:58.124 ++ ID=fedora
00:00:58.124 ++ VERSION_ID=38
00:00:58.124 ++ VERSION_CODENAME=
00:00:58.124 ++ PLATFORM_ID=platform:f38
00:00:58.124 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:58.124 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:58.124 ++ LOGO=fedora-logo-icon
00:00:58.124 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:58.124 ++ HOME_URL=https://fedoraproject.org/
00:00:58.125 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:58.125 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:58.125 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:58.125 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:58.125 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:58.125 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:58.125 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:58.125 ++ SUPPORT_END=2024-05-14
00:00:58.125 ++ VARIANT='Cloud Edition'
00:00:58.125 ++ VARIANT_ID=cloud
00:00:58.125 + uname -a
00:00:58.125 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:58.125 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:59.060 Hugepages
00:00:59.060 node hugesize free / total
00:00:59.060 node0 1048576kB 0 / 0
00:00:59.060 node0 2048kB 0 / 0
00:00:59.060 node1 1048576kB 0 / 0
00:00:59.060 node1 2048kB 0 / 0
00:00:59.060
00:00:59.060 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:59.060 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:59.060 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:59.060 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:59.060 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:59.060 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:59.060 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:59.060 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:59.060 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:59.060 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:59.060 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:59.060 + rm -f /tmp/spdk-ld-path
00:00:59.060 + source autorun-spdk.conf
00:00:59.060 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.060 ++ SPDK_TEST_NVMF=1
00:00:59.060 ++ SPDK_TEST_NVME_CLI=1
00:00:59.060 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.060 ++ SPDK_TEST_NVMF_NICS=e810
00:00:59.060 ++ SPDK_RUN_ASAN=1
00:00:59.060 ++ SPDK_RUN_UBSAN=1
00:00:59.060 ++ NET_TYPE=phy
00:00:59.060 ++ RUN_NIGHTLY=1
00:00:59.060 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:59.060 + [[ -n '' ]]
00:00:59.060 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:59.060 + for M in /var/spdk/build-*-manifest.txt
00:00:59.060 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:59.060 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:59.060 + for M in /var/spdk/build-*-manifest.txt
00:00:59.060 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:59.060 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:59.060 ++ uname
00:00:59.060 + [[ Linux == \L\i\n\u\x ]]
00:00:59.060 + sudo dmesg -T
00:00:59.319 + sudo dmesg --clear
00:00:59.319 + dmesg_pid=4107761
00:00:59.319 + [[ Fedora Linux == FreeBSD ]]
00:00:59.319 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:59.319 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:59.319 + sudo dmesg -Tw
00:00:59.319 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:59.319 + [[ -x /usr/src/fio-static/fio ]]
00:00:59.319 + export FIO_BIN=/usr/src/fio-static/fio
00:00:59.319 + FIO_BIN=/usr/src/fio-static/fio
00:00:59.319 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:59.319 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:59.319 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:59.319 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:59.319 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:59.319 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:59.319 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:59.319 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:59.319 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:59.319 Test configuration:
00:00:59.319 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.319 SPDK_TEST_NVMF=1
00:00:59.319 SPDK_TEST_NVME_CLI=1
00:00:59.319 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.319 SPDK_TEST_NVMF_NICS=e810
00:00:59.319 SPDK_RUN_ASAN=1
00:00:59.319 SPDK_RUN_UBSAN=1
00:00:59.319 NET_TYPE=phy
00:00:59.319 RUN_NIGHTLY=1
00:00:59.319 05:53:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:59.319 05:53:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:59.319 05:53:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:59.319 05:53:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:59.319 05:53:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:59.319 05:53:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:59.320 05:53:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:59.320 05:53:10 -- paths/export.sh@5 -- $ export PATH
00:00:59.320 05:53:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:59.320 05:53:10 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:59.320 05:53:10 -- common/autobuild_common.sh@447 -- $ date +%s
00:00:59.320 05:53:10 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721965990.XXXXXX
00:00:59.320 05:53:10 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721965990.EBF7CQ
00:00:59.320 05:53:10 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:00:59.320 05:53:10 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:00:59.320 05:53:10 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:59.320 05:53:10 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:59.320 05:53:10 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:59.320 05:53:10 -- common/autobuild_common.sh@463 -- $ get_config_params
00:00:59.320 05:53:10 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:00:59.320 05:53:10 -- common/autotest_common.sh@10 -- $ set +x
00:00:59.320 05:53:10 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:00:59.320 05:53:10 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:00:59.320 05:53:10 -- pm/common@17 -- $ local monitor
00:00:59.320 05:53:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:59.320 05:53:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:59.320 05:53:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:59.320 05:53:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:59.320 05:53:10 -- pm/common@21 -- $ date +%s
00:00:59.320 05:53:10 -- pm/common@21 -- $ date +%s
00:00:59.320 05:53:10 -- pm/common@25 -- $ sleep 1
00:00:59.320 05:53:10 -- pm/common@21 -- $ date +%s
00:00:59.320 05:53:10 -- pm/common@21 -- $ date +%s
00:00:59.320 05:53:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721965990
00:00:59.320 05:53:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721965990
00:00:59.320 05:53:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721965990
00:00:59.320 05:53:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721965990
00:00:59.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721965990_collect-vmstat.pm.log
00:00:59.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721965990_collect-cpu-load.pm.log
00:00:59.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721965990_collect-cpu-temp.pm.log
00:00:59.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721965990_collect-bmc-pm.bmc.pm.log
00:01:00.255 05:53:11 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:00.255 05:53:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:00.255 05:53:11 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:00.255 05:53:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:00.255 05:53:11 -- spdk/autobuild.sh@16 -- $ date -u
00:01:00.255 Fri Jul 26 03:53:11 AM UTC 2024
00:01:00.256 05:53:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:00.256 v24.09-pre-321-g704257090
00:01:00.256 05:53:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:00.256 05:53:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:00.256 05:53:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:00.256 05:53:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:00.256 05:53:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:00.256 ************************************
00:01:00.256 START TEST asan
00:01:00.256 ************************************
00:01:00.256 05:53:11 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:00.256 using asan
00:01:00.256
00:01:00.256 real	0m0.000s
00:01:00.256 user	0m0.000s
00:01:00.256 sys	0m0.000s
00:01:00.256 05:53:11 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:00.256 05:53:11 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:00.256 ************************************
00:01:00.256 END TEST asan
00:01:00.256 ************************************
00:01:00.256 05:53:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:00.256 05:53:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:00.256 05:53:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:00.256 05:53:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:00.256 05:53:11 -- common/autotest_common.sh@10 -- $ set +x
00:01:00.256 ************************************
00:01:00.256 START TEST ubsan
00:01:00.256 ************************************
00:01:00.256 05:53:11 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:00.256 using ubsan
00:01:00.256
00:01:00.256 real	0m0.000s
00:01:00.256 user	0m0.000s
00:01:00.256 sys	0m0.000s
00:01:00.256 05:53:11 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:00.256 05:53:11 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:00.256 ************************************
00:01:00.256 END TEST ubsan
00:01:00.256 ************************************
00:01:00.515 05:53:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:00.515 05:53:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:00.515 05:53:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:00.515 05:53:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:00.515 05:53:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:00.515 05:53:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:00.515 05:53:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:00.515 05:53:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:00.515 05:53:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:00.515 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:00.515 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:00.773 Using 'verbs' RDMA provider
00:01:11.319 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:21.295 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:21.295 Creating mk/config.mk...done.
00:01:21.295 Creating mk/cc.flags.mk...done.
00:01:21.295 Type 'make' to build.
00:01:21.295 05:53:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:21.295 05:53:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:21.295 05:53:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:21.295 05:53:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.295 ************************************
00:01:21.295 START TEST make
00:01:21.295 ************************************
00:01:21.295 05:53:31 make -- common/autotest_common.sh@1125 -- $ make -j48
00:01:21.295 make[1]: Nothing to be done for 'all'.
00:01:29.462 The Meson build system
00:01:29.462 Version: 1.3.1
00:01:29.462 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:29.462 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:29.462 Build type: native build
00:01:29.462 Program cat found: YES (/usr/bin/cat)
00:01:29.462 Project name: DPDK
00:01:29.462 Project version: 24.03.0
00:01:29.462 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:29.462 C linker for the host machine: cc ld.bfd 2.39-16
00:01:29.462 Host machine cpu family: x86_64
00:01:29.462 Host machine cpu: x86_64
00:01:29.462 Message: ## Building in Developer Mode ##
00:01:29.462 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:29.462 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:29.462 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:29.462 Program python3 found: YES (/usr/bin/python3)
00:01:29.462 Program cat found: YES (/usr/bin/cat)
00:01:29.462 Compiler for C supports arguments -march=native: YES
00:01:29.462 Checking for size of "void *" : 8
00:01:29.462 Checking for size of "void *" : 8 (cached)
00:01:29.462 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:29.462 Library m found: YES
00:01:29.462 Library numa found: YES
00:01:29.462 Has header "numaif.h" : YES
00:01:29.462 Library fdt found: NO
00:01:29.462 Library execinfo found: NO
00:01:29.462 Has header "execinfo.h" : YES
00:01:29.462 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:29.462 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:29.462 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:29.462 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:29.462 Run-time dependency openssl found: YES 3.0.9
00:01:29.462 Run-time dependency libpcap found: YES 1.10.4
00:01:29.462 Has header "pcap.h" with dependency libpcap: YES
00:01:29.462 Compiler for C supports arguments -Wcast-qual: YES
00:01:29.462 Compiler for C supports arguments -Wdeprecated: YES
00:01:29.462 Compiler for C supports arguments -Wformat: YES
00:01:29.463 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:29.463 Compiler for C supports arguments -Wformat-security: NO
00:01:29.463 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:29.463 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:29.463 Compiler for C supports arguments -Wnested-externs: YES
00:01:29.463 Compiler for C supports arguments -Wold-style-definition: YES
00:01:29.463 Compiler for C supports arguments -Wpointer-arith: YES
00:01:29.463 Compiler for C supports arguments -Wsign-compare: YES
00:01:29.463 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:29.463 Compiler for C supports arguments -Wundef: YES
00:01:29.463 Compiler for C supports arguments -Wwrite-strings: YES
00:01:29.463 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:29.463 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:29.463 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:29.463 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:29.463 Program objdump found: YES (/usr/bin/objdump)
00:01:29.463 Compiler for C supports arguments -mavx512f: YES
00:01:29.463 Checking if "AVX512 checking" compiles: YES
00:01:29.463 Fetching value of define "__SSE4_2__" : 1
00:01:29.463 Fetching value of define "__AES__" : 1
00:01:29.463 Fetching value of define "__AVX__" : 1
00:01:29.463 Fetching value of define "__AVX2__" : (undefined)
00:01:29.463 Fetching value of define "__AVX512BW__" : (undefined)
00:01:29.463 Fetching value of define "__AVX512CD__" : (undefined)
00:01:29.463 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:29.463 Fetching value of define "__AVX512F__" : (undefined)
00:01:29.463 Fetching value of define "__AVX512VL__" : (undefined)
00:01:29.463 Fetching value of define "__PCLMUL__" : 1
00:01:29.463 Fetching value of define "__RDRND__" : 1
00:01:29.463 Fetching value of define "__RDSEED__" : (undefined)
00:01:29.463 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:29.463 Fetching value of define "__znver1__" : (undefined)
00:01:29.463 Fetching value of define "__znver2__" : (undefined)
00:01:29.463 Fetching value of define "__znver3__" : (undefined)
00:01:29.463 Fetching value of define "__znver4__" : (undefined)
00:01:29.463 Library asan found: YES
00:01:29.463 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:29.463 Message: lib/log: Defining dependency "log"
00:01:29.463 Message: lib/kvargs: Defining dependency "kvargs"
00:01:29.463 Message: lib/telemetry: Defining dependency "telemetry"
00:01:29.463 Library rt found: YES
00:01:29.463 Checking for function "getentropy" : NO
00:01:29.463 Message: lib/eal: Defining dependency "eal"
00:01:29.463 Message: lib/ring: Defining dependency "ring"
00:01:29.463 Message: lib/rcu: Defining dependency "rcu"
00:01:29.463 Message: lib/mempool: Defining dependency "mempool"
00:01:29.463 Message: lib/mbuf: Defining dependency "mbuf"
00:01:29.463 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:29.463 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:29.463 Compiler for C supports arguments -mpclmul: YES
00:01:29.463 Compiler for C supports arguments -maes: YES
00:01:29.463 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:29.463 Compiler for C supports arguments -mavx512bw: YES
00:01:29.463 Compiler for C supports arguments -mavx512dq: YES
00:01:29.463 Compiler for C supports arguments -mavx512vl: YES
00:01:29.463 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:29.463 Compiler for C supports arguments -mavx2: YES
00:01:29.463 Compiler for C supports arguments -mavx: YES
00:01:29.463 Message: lib/net: Defining dependency "net"
00:01:29.463 Message: lib/meter: Defining dependency "meter"
00:01:29.463 Message: lib/ethdev: Defining dependency "ethdev"
00:01:29.463 Message: lib/pci: Defining dependency "pci"
00:01:29.463 Message: lib/cmdline: Defining dependency "cmdline"
00:01:29.463 Message: lib/hash: Defining dependency "hash"
00:01:29.463 Message: lib/timer: Defining dependency "timer"
00:01:29.463 Message: lib/compressdev: Defining dependency "compressdev"
00:01:29.463 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:29.463 Message: lib/dmadev: Defining dependency "dmadev"
00:01:29.463 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:29.463 Message: lib/power: Defining dependency "power"
00:01:29.463 Message: lib/reorder: Defining dependency "reorder"
00:01:29.463 Message: lib/security: Defining dependency "security"
00:01:29.463 Has header "linux/userfaultfd.h" : YES
00:01:29.463 Has header "linux/vduse.h" : YES
00:01:29.463 Message: lib/vhost: Defining dependency "vhost"
00:01:29.463 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:29.463 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:29.463 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:29.463 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:29.463 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:29.463 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:29.463 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:29.463 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:29.463 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:29.463 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:29.463 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.463 Configuring doxy-api-html.conf using configuration 00:01:29.463 Configuring doxy-api-man.conf using configuration 00:01:29.463 Program mandb found: YES (/usr/bin/mandb) 00:01:29.463 Program sphinx-build found: NO 00:01:29.463 Configuring rte_build_config.h using configuration 00:01:29.463 Message: 00:01:29.463 ================= 00:01:29.463 Applications Enabled 00:01:29.463 ================= 00:01:29.463 00:01:29.463 apps: 00:01:29.463 00:01:29.463 00:01:29.463 Message: 00:01:29.463 ================= 00:01:29.463 Libraries Enabled 00:01:29.463 ================= 00:01:29.463 00:01:29.463 libs: 00:01:29.463 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:29.463 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:29.463 cryptodev, dmadev, power, reorder, security, vhost, 00:01:29.463 00:01:29.463 Message: 00:01:29.463 =============== 00:01:29.463 Drivers Enabled 00:01:29.463 =============== 00:01:29.463 00:01:29.463 common: 00:01:29.463 00:01:29.463 bus: 00:01:29.463 pci, vdev, 00:01:29.463 mempool: 00:01:29.463 ring, 00:01:29.463 dma: 00:01:29.463 00:01:29.463 net: 00:01:29.463 00:01:29.463 crypto: 00:01:29.463 00:01:29.463 compress: 00:01:29.463 00:01:29.463 vdpa: 00:01:29.463 00:01:29.463 00:01:29.463 Message: 00:01:29.463 ================= 00:01:29.463 Content Skipped 00:01:29.463 ================= 00:01:29.463 00:01:29.463 apps: 00:01:29.463 dumpcap: explicitly disabled via build config 00:01:29.463 graph: explicitly disabled via build config 
00:01:29.463 pdump: explicitly disabled via build config 00:01:29.463 proc-info: explicitly disabled via build config 00:01:29.463 test-acl: explicitly disabled via build config 00:01:29.463 test-bbdev: explicitly disabled via build config 00:01:29.463 test-cmdline: explicitly disabled via build config 00:01:29.463 test-compress-perf: explicitly disabled via build config 00:01:29.463 test-crypto-perf: explicitly disabled via build config 00:01:29.463 test-dma-perf: explicitly disabled via build config 00:01:29.463 test-eventdev: explicitly disabled via build config 00:01:29.463 test-fib: explicitly disabled via build config 00:01:29.463 test-flow-perf: explicitly disabled via build config 00:01:29.463 test-gpudev: explicitly disabled via build config 00:01:29.463 test-mldev: explicitly disabled via build config 00:01:29.463 test-pipeline: explicitly disabled via build config 00:01:29.463 test-pmd: explicitly disabled via build config 00:01:29.463 test-regex: explicitly disabled via build config 00:01:29.463 test-sad: explicitly disabled via build config 00:01:29.463 test-security-perf: explicitly disabled via build config 00:01:29.463 00:01:29.463 libs: 00:01:29.463 argparse: explicitly disabled via build config 00:01:29.463 metrics: explicitly disabled via build config 00:01:29.463 acl: explicitly disabled via build config 00:01:29.463 bbdev: explicitly disabled via build config 00:01:29.463 bitratestats: explicitly disabled via build config 00:01:29.463 bpf: explicitly disabled via build config 00:01:29.463 cfgfile: explicitly disabled via build config 00:01:29.463 distributor: explicitly disabled via build config 00:01:29.463 efd: explicitly disabled via build config 00:01:29.463 eventdev: explicitly disabled via build config 00:01:29.463 dispatcher: explicitly disabled via build config 00:01:29.463 gpudev: explicitly disabled via build config 00:01:29.463 gro: explicitly disabled via build config 00:01:29.463 gso: explicitly disabled via build config 
00:01:29.463 ip_frag: explicitly disabled via build config 00:01:29.464 jobstats: explicitly disabled via build config 00:01:29.464 latencystats: explicitly disabled via build config 00:01:29.464 lpm: explicitly disabled via build config 00:01:29.464 member: explicitly disabled via build config 00:01:29.464 pcapng: explicitly disabled via build config 00:01:29.464 rawdev: explicitly disabled via build config 00:01:29.464 regexdev: explicitly disabled via build config 00:01:29.464 mldev: explicitly disabled via build config 00:01:29.464 rib: explicitly disabled via build config 00:01:29.464 sched: explicitly disabled via build config 00:01:29.464 stack: explicitly disabled via build config 00:01:29.464 ipsec: explicitly disabled via build config 00:01:29.464 pdcp: explicitly disabled via build config 00:01:29.464 fib: explicitly disabled via build config 00:01:29.464 port: explicitly disabled via build config 00:01:29.464 pdump: explicitly disabled via build config 00:01:29.464 table: explicitly disabled via build config 00:01:29.464 pipeline: explicitly disabled via build config 00:01:29.464 graph: explicitly disabled via build config 00:01:29.464 node: explicitly disabled via build config 00:01:29.464 00:01:29.464 drivers: 00:01:29.464 common/cpt: not in enabled drivers build config 00:01:29.464 common/dpaax: not in enabled drivers build config 00:01:29.464 common/iavf: not in enabled drivers build config 00:01:29.464 common/idpf: not in enabled drivers build config 00:01:29.464 common/ionic: not in enabled drivers build config 00:01:29.464 common/mvep: not in enabled drivers build config 00:01:29.464 common/octeontx: not in enabled drivers build config 00:01:29.464 bus/auxiliary: not in enabled drivers build config 00:01:29.464 bus/cdx: not in enabled drivers build config 00:01:29.464 bus/dpaa: not in enabled drivers build config 00:01:29.464 bus/fslmc: not in enabled drivers build config 00:01:29.464 bus/ifpga: not in enabled drivers build config 00:01:29.464 
bus/platform: not in enabled drivers build config 00:01:29.464 bus/uacce: not in enabled drivers build config 00:01:29.464 bus/vmbus: not in enabled drivers build config 00:01:29.464 common/cnxk: not in enabled drivers build config 00:01:29.464 common/mlx5: not in enabled drivers build config 00:01:29.464 common/nfp: not in enabled drivers build config 00:01:29.464 common/nitrox: not in enabled drivers build config 00:01:29.464 common/qat: not in enabled drivers build config 00:01:29.464 common/sfc_efx: not in enabled drivers build config 00:01:29.464 mempool/bucket: not in enabled drivers build config 00:01:29.464 mempool/cnxk: not in enabled drivers build config 00:01:29.464 mempool/dpaa: not in enabled drivers build config 00:01:29.464 mempool/dpaa2: not in enabled drivers build config 00:01:29.464 mempool/octeontx: not in enabled drivers build config 00:01:29.464 mempool/stack: not in enabled drivers build config 00:01:29.464 dma/cnxk: not in enabled drivers build config 00:01:29.464 dma/dpaa: not in enabled drivers build config 00:01:29.464 dma/dpaa2: not in enabled drivers build config 00:01:29.464 dma/hisilicon: not in enabled drivers build config 00:01:29.464 dma/idxd: not in enabled drivers build config 00:01:29.464 dma/ioat: not in enabled drivers build config 00:01:29.464 dma/skeleton: not in enabled drivers build config 00:01:29.464 net/af_packet: not in enabled drivers build config 00:01:29.464 net/af_xdp: not in enabled drivers build config 00:01:29.464 net/ark: not in enabled drivers build config 00:01:29.464 net/atlantic: not in enabled drivers build config 00:01:29.464 net/avp: not in enabled drivers build config 00:01:29.464 net/axgbe: not in enabled drivers build config 00:01:29.464 net/bnx2x: not in enabled drivers build config 00:01:29.464 net/bnxt: not in enabled drivers build config 00:01:29.464 net/bonding: not in enabled drivers build config 00:01:29.464 net/cnxk: not in enabled drivers build config 00:01:29.464 net/cpfl: not in enabled 
drivers build config 00:01:29.464 net/cxgbe: not in enabled drivers build config 00:01:29.464 net/dpaa: not in enabled drivers build config 00:01:29.464 net/dpaa2: not in enabled drivers build config 00:01:29.464 net/e1000: not in enabled drivers build config 00:01:29.464 net/ena: not in enabled drivers build config 00:01:29.464 net/enetc: not in enabled drivers build config 00:01:29.464 net/enetfec: not in enabled drivers build config 00:01:29.464 net/enic: not in enabled drivers build config 00:01:29.464 net/failsafe: not in enabled drivers build config 00:01:29.464 net/fm10k: not in enabled drivers build config 00:01:29.464 net/gve: not in enabled drivers build config 00:01:29.464 net/hinic: not in enabled drivers build config 00:01:29.464 net/hns3: not in enabled drivers build config 00:01:29.464 net/i40e: not in enabled drivers build config 00:01:29.464 net/iavf: not in enabled drivers build config 00:01:29.464 net/ice: not in enabled drivers build config 00:01:29.464 net/idpf: not in enabled drivers build config 00:01:29.464 net/igc: not in enabled drivers build config 00:01:29.464 net/ionic: not in enabled drivers build config 00:01:29.464 net/ipn3ke: not in enabled drivers build config 00:01:29.464 net/ixgbe: not in enabled drivers build config 00:01:29.464 net/mana: not in enabled drivers build config 00:01:29.464 net/memif: not in enabled drivers build config 00:01:29.464 net/mlx4: not in enabled drivers build config 00:01:29.464 net/mlx5: not in enabled drivers build config 00:01:29.464 net/mvneta: not in enabled drivers build config 00:01:29.464 net/mvpp2: not in enabled drivers build config 00:01:29.464 net/netvsc: not in enabled drivers build config 00:01:29.464 net/nfb: not in enabled drivers build config 00:01:29.464 net/nfp: not in enabled drivers build config 00:01:29.464 net/ngbe: not in enabled drivers build config 00:01:29.464 net/null: not in enabled drivers build config 00:01:29.464 net/octeontx: not in enabled drivers build config 
00:01:29.464 net/octeon_ep: not in enabled drivers build config 00:01:29.464 net/pcap: not in enabled drivers build config 00:01:29.464 net/pfe: not in enabled drivers build config 00:01:29.464 net/qede: not in enabled drivers build config 00:01:29.464 net/ring: not in enabled drivers build config 00:01:29.464 net/sfc: not in enabled drivers build config 00:01:29.464 net/softnic: not in enabled drivers build config 00:01:29.464 net/tap: not in enabled drivers build config 00:01:29.464 net/thunderx: not in enabled drivers build config 00:01:29.464 net/txgbe: not in enabled drivers build config 00:01:29.464 net/vdev_netvsc: not in enabled drivers build config 00:01:29.464 net/vhost: not in enabled drivers build config 00:01:29.464 net/virtio: not in enabled drivers build config 00:01:29.464 net/vmxnet3: not in enabled drivers build config 00:01:29.464 raw/*: missing internal dependency, "rawdev" 00:01:29.464 crypto/armv8: not in enabled drivers build config 00:01:29.464 crypto/bcmfs: not in enabled drivers build config 00:01:29.464 crypto/caam_jr: not in enabled drivers build config 00:01:29.464 crypto/ccp: not in enabled drivers build config 00:01:29.464 crypto/cnxk: not in enabled drivers build config 00:01:29.464 crypto/dpaa_sec: not in enabled drivers build config 00:01:29.464 crypto/dpaa2_sec: not in enabled drivers build config 00:01:29.464 crypto/ipsec_mb: not in enabled drivers build config 00:01:29.464 crypto/mlx5: not in enabled drivers build config 00:01:29.464 crypto/mvsam: not in enabled drivers build config 00:01:29.464 crypto/nitrox: not in enabled drivers build config 00:01:29.464 crypto/null: not in enabled drivers build config 00:01:29.464 crypto/octeontx: not in enabled drivers build config 00:01:29.464 crypto/openssl: not in enabled drivers build config 00:01:29.464 crypto/scheduler: not in enabled drivers build config 00:01:29.464 crypto/uadk: not in enabled drivers build config 00:01:29.464 crypto/virtio: not in enabled drivers build config 
00:01:29.464 compress/isal: not in enabled drivers build config 00:01:29.464 compress/mlx5: not in enabled drivers build config 00:01:29.464 compress/nitrox: not in enabled drivers build config 00:01:29.464 compress/octeontx: not in enabled drivers build config 00:01:29.464 compress/zlib: not in enabled drivers build config 00:01:29.464 regex/*: missing internal dependency, "regexdev" 00:01:29.464 ml/*: missing internal dependency, "mldev" 00:01:29.464 vdpa/ifc: not in enabled drivers build config 00:01:29.464 vdpa/mlx5: not in enabled drivers build config 00:01:29.464 vdpa/nfp: not in enabled drivers build config 00:01:29.464 vdpa/sfc: not in enabled drivers build config 00:01:29.464 event/*: missing internal dependency, "eventdev" 00:01:29.464 baseband/*: missing internal dependency, "bbdev" 00:01:29.464 gpu/*: missing internal dependency, "gpudev" 00:01:29.464 00:01:29.464 00:01:29.464 Build targets in project: 85 00:01:29.464 00:01:29.464 DPDK 24.03.0 00:01:29.464 00:01:29.464 User defined options 00:01:29.464 buildtype : debug 00:01:29.464 default_library : shared 00:01:29.464 libdir : lib 00:01:29.465 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:29.465 b_sanitize : address 00:01:29.465 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:29.465 c_link_args : 00:01:29.465 cpu_instruction_set: native 00:01:29.465 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:29.465 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:29.465 enable_docs : false 00:01:29.465 
enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:29.465 enable_kmods : false 00:01:29.465 max_lcores : 128 00:01:29.465 tests : false 00:01:29.465 00:01:29.465 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.729 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:29.729 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:29.729 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:29.729 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:29.729 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:29.729 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:29.729 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:29.729 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:29.729 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:29.729 [9/268] Linking static target lib/librte_kvargs.a 00:01:29.729 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:29.729 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:29.729 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:29.729 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:29.729 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:29.729 [15/268] Linking static target lib/librte_log.a 00:01:29.989 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:30.561 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.561 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:30.561 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.561 
[20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.561 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.562 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.562 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:30.562 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:30.562 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.562 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:30.562 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:30.562 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:30.562 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:30.562 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:30.562 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:30.562 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:30.562 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:30.562 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.562 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:30.562 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.562 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:30.562 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:30.562 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:30.562 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:30.822 [41/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.822 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:30.822 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.822 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.822 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.822 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:30.822 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.822 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:30.822 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:30.822 [50/268] Linking static target lib/librte_telemetry.a 00:01:30.822 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:30.822 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:30.822 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:30.822 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:30.822 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.823 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:30.823 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:30.823 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:30.823 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:30.823 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.086 [61/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.086 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.086 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:01:31.086 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.086 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.086 [66/268] Linking target lib/librte_log.so.24.1 00:01:31.348 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.616 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.616 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.616 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.616 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.616 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.616 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.616 [74/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:31.616 [75/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.616 [76/268] Linking static target lib/librte_pci.a 00:01:31.616 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.616 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:31.616 [79/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:31.616 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.616 [81/268] Linking target lib/librte_kvargs.so.24.1 00:01:31.616 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:31.616 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.616 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.616 [85/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.616 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.616 [87/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.880 [88/268] Linking static target lib/librte_meter.a 00:01:31.880 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.880 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.881 [91/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.881 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.881 [93/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.881 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:31.881 [95/268] Linking static target lib/librte_ring.a 00:01:31.881 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.881 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.881 [98/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.881 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.881 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:31.881 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.881 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.881 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.881 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.881 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.881 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.881 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:31.881 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.881 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:31.881 
[110/268] Linking target lib/librte_telemetry.so.24.1 00:01:31.881 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.881 [112/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.160 [113/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:32.161 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:32.161 [115/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:32.161 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.161 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.161 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:32.161 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.161 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.161 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.161 [122/268] Linking static target lib/librte_mempool.a 00:01:32.161 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:32.161 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:32.161 [125/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:32.161 [126/268] Linking static target lib/librte_rcu.a 00:01:32.161 [127/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.442 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.442 [129/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:32.442 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.442 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.442 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 
00:01:32.442 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:32.442 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.442 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:32.705 [136/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:32.705 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.705 [138/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:32.705 [139/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:32.705 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.705 [141/268] Linking static target lib/librte_eal.a 00:01:32.705 [142/268] Linking static target lib/librte_cmdline.a 00:01:32.705 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:32.705 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:32.705 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.963 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:32.963 [147/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.963 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.963 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:32.963 [150/268] Linking static target lib/librte_timer.a 00:01:32.963 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.963 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.963 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.963 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:32.963 [155/268] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:32.963 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:33.223 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:33.223 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:33.223 [159/268] Linking static target lib/librte_dmadev.a 00:01:33.223 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:33.223 [161/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.223 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:33.223 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:33.223 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.482 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.482 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:33.482 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:33.482 [168/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:33.482 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:33.482 [170/268] Linking static target lib/librte_net.a 00:01:33.482 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:33.482 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:33.482 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:33.741 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.741 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:33.741 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:33.741 [177/268] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:33.741 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:33.741 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.741 [180/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:33.741 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:33.741 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:33.741 [183/268] Linking static target lib/librte_power.a 00:01:33.741 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:33.741 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:33.741 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:33.741 [187/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.000 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:34.000 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:34.000 [190/268] Linking static target lib/librte_hash.a 00:01:34.000 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.000 [192/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.000 [193/268] Linking static target drivers/librte_bus_vdev.a 00:01:34.000 [194/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:34.000 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.000 [196/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.000 [197/268] Linking static target drivers/librte_bus_pci.a 00:01:34.000 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:34.000 [199/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:34.000 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:34.258 [201/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:34.259 [202/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.259 [203/268] Linking static target lib/librte_reorder.a 00:01:34.259 [204/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.259 [205/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:34.259 [206/268] Linking static target lib/librte_compressdev.a 00:01:34.259 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:34.259 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.259 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.259 [210/268] Linking static target drivers/librte_mempool_ring.a 00:01:34.259 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:34.517 [212/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.517 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.517 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.517 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.776 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:34.776 [217/268] Linking static target lib/librte_security.a 00:01:35.035 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.293 [219/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.228 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.228 [221/268] Linking static target lib/librte_mbuf.a 00:01:36.487 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.487 [223/268] Linking static target lib/librte_cryptodev.a 00:01:36.487 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.423 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:37.423 [226/268] Linking static target lib/librte_ethdev.a 00:01:37.423 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.799 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.800 [229/268] Linking target lib/librte_eal.so.24.1 00:01:38.800 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:38.800 [231/268] Linking target lib/librte_meter.so.24.1 00:01:38.800 [232/268] Linking target lib/librte_timer.so.24.1 00:01:38.800 [233/268] Linking target lib/librte_dmadev.so.24.1 00:01:38.800 [234/268] Linking target lib/librte_pci.so.24.1 00:01:38.800 [235/268] Linking target lib/librte_ring.so.24.1 00:01:38.800 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:38.800 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:38.800 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:38.800 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:38.800 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:38.800 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:39.058 [242/268] Linking target lib/librte_rcu.so.24.1 00:01:39.058 [243/268] Linking target 
lib/librte_mempool.so.24.1 00:01:39.058 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:39.058 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:39.058 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:39.058 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:39.058 [248/268] Linking target lib/librte_mbuf.so.24.1 00:01:39.316 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:39.316 [250/268] Linking target lib/librte_reorder.so.24.1 00:01:39.317 [251/268] Linking target lib/librte_compressdev.so.24.1 00:01:39.317 [252/268] Linking target lib/librte_net.so.24.1 00:01:39.317 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:39.317 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:39.585 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:39.585 [256/268] Linking target lib/librte_hash.so.24.1 00:01:39.585 [257/268] Linking target lib/librte_cmdline.so.24.1 00:01:39.585 [258/268] Linking target lib/librte_security.so.24.1 00:01:39.585 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:40.157 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.561 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.561 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:41.820 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:41.820 [264/268] Linking target lib/librte_power.so.24.1 00:02:03.759 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.759 [266/268] Linking static target lib/librte_vhost.a 00:02:03.759 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.759 
[268/268] Linking target lib/librte_vhost.so.24.1 00:02:04.017 INFO: autodetecting backend as ninja 00:02:04.017 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:04.951 CC lib/log/log.o 00:02:04.951 CC lib/log/log_flags.o 00:02:04.951 CC lib/log/log_deprecated.o 00:02:04.951 CC lib/ut/ut.o 00:02:04.951 CC lib/ut_mock/mock.o 00:02:05.209 LIB libspdk_log.a 00:02:05.209 LIB libspdk_ut.a 00:02:05.209 LIB libspdk_ut_mock.a 00:02:05.209 SO libspdk_log.so.7.0 00:02:05.209 SO libspdk_ut_mock.so.6.0 00:02:05.209 SO libspdk_ut.so.2.0 00:02:05.209 SYMLINK libspdk_ut_mock.so 00:02:05.209 SYMLINK libspdk_ut.so 00:02:05.209 SYMLINK libspdk_log.so 00:02:05.467 CC lib/dma/dma.o 00:02:05.467 CXX lib/trace_parser/trace.o 00:02:05.467 CC lib/ioat/ioat.o 00:02:05.467 CC lib/util/base64.o 00:02:05.467 CC lib/util/bit_array.o 00:02:05.468 CC lib/util/cpuset.o 00:02:05.468 CC lib/util/crc16.o 00:02:05.468 CC lib/util/crc32.o 00:02:05.468 CC lib/util/crc32c.o 00:02:05.468 CC lib/util/crc32_ieee.o 00:02:05.468 CC lib/util/crc64.o 00:02:05.468 CC lib/util/dif.o 00:02:05.468 CC lib/util/fd.o 00:02:05.468 CC lib/util/fd_group.o 00:02:05.468 CC lib/util/file.o 00:02:05.468 CC lib/util/hexlify.o 00:02:05.468 CC lib/util/iov.o 00:02:05.468 CC lib/util/math.o 00:02:05.468 CC lib/util/net.o 00:02:05.468 CC lib/util/pipe.o 00:02:05.468 CC lib/util/strerror_tls.o 00:02:05.468 CC lib/util/string.o 00:02:05.468 CC lib/util/xor.o 00:02:05.468 CC lib/util/uuid.o 00:02:05.468 CC lib/util/zipf.o 00:02:05.468 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.468 CC lib/vfio_user/host/vfio_user.o 00:02:05.468 LIB libspdk_dma.a 00:02:05.726 SO libspdk_dma.so.4.0 00:02:05.726 SYMLINK libspdk_dma.so 00:02:05.726 LIB libspdk_ioat.a 00:02:05.726 SO libspdk_ioat.so.7.0 00:02:05.726 LIB libspdk_vfio_user.a 00:02:05.726 SYMLINK libspdk_ioat.so 00:02:05.726 SO libspdk_vfio_user.so.5.0 00:02:05.726 SYMLINK libspdk_vfio_user.so 
00:02:05.985 LIB libspdk_util.a 00:02:06.243 SO libspdk_util.so.10.0 00:02:06.243 SYMLINK libspdk_util.so 00:02:06.501 LIB libspdk_trace_parser.a 00:02:06.501 SO libspdk_trace_parser.so.5.0 00:02:06.501 CC lib/rdma_provider/common.o 00:02:06.501 CC lib/json/json_parse.o 00:02:06.501 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:06.501 CC lib/json/json_util.o 00:02:06.501 CC lib/rdma_utils/rdma_utils.o 00:02:06.501 CC lib/json/json_write.o 00:02:06.501 CC lib/conf/conf.o 00:02:06.501 CC lib/env_dpdk/env.o 00:02:06.501 CC lib/vmd/vmd.o 00:02:06.501 CC lib/idxd/idxd.o 00:02:06.501 CC lib/env_dpdk/memory.o 00:02:06.501 CC lib/vmd/led.o 00:02:06.501 CC lib/env_dpdk/pci.o 00:02:06.501 CC lib/idxd/idxd_user.o 00:02:06.501 CC lib/env_dpdk/init.o 00:02:06.501 CC lib/idxd/idxd_kernel.o 00:02:06.501 CC lib/env_dpdk/threads.o 00:02:06.501 CC lib/env_dpdk/pci_ioat.o 00:02:06.501 CC lib/env_dpdk/pci_virtio.o 00:02:06.501 CC lib/env_dpdk/pci_vmd.o 00:02:06.501 CC lib/env_dpdk/pci_idxd.o 00:02:06.501 CC lib/env_dpdk/pci_event.o 00:02:06.501 CC lib/env_dpdk/sigbus_handler.o 00:02:06.501 CC lib/env_dpdk/pci_dpdk.o 00:02:06.501 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.501 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:06.501 SYMLINK libspdk_trace_parser.so 00:02:06.759 LIB libspdk_rdma_provider.a 00:02:06.759 SO libspdk_rdma_provider.so.6.0 00:02:06.759 LIB libspdk_conf.a 00:02:06.759 SO libspdk_conf.so.6.0 00:02:06.759 SYMLINK libspdk_rdma_provider.so 00:02:06.759 LIB libspdk_rdma_utils.a 00:02:06.759 SO libspdk_rdma_utils.so.1.0 00:02:06.759 LIB libspdk_json.a 00:02:06.759 SYMLINK libspdk_conf.so 00:02:06.759 SO libspdk_json.so.6.0 00:02:06.759 SYMLINK libspdk_rdma_utils.so 00:02:07.018 SYMLINK libspdk_json.so 00:02:07.018 CC lib/jsonrpc/jsonrpc_server.o 00:02:07.018 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:07.018 CC lib/jsonrpc/jsonrpc_client.o 00:02:07.018 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:07.276 LIB libspdk_idxd.a 00:02:07.276 SO libspdk_idxd.so.12.0 00:02:07.276 SYMLINK 
libspdk_idxd.so 00:02:07.276 LIB libspdk_jsonrpc.a 00:02:07.534 SO libspdk_jsonrpc.so.6.0 00:02:07.534 LIB libspdk_vmd.a 00:02:07.534 SO libspdk_vmd.so.6.0 00:02:07.534 SYMLINK libspdk_jsonrpc.so 00:02:07.534 SYMLINK libspdk_vmd.so 00:02:07.534 CC lib/rpc/rpc.o 00:02:07.792 LIB libspdk_rpc.a 00:02:08.050 SO libspdk_rpc.so.6.0 00:02:08.051 SYMLINK libspdk_rpc.so 00:02:08.051 CC lib/keyring/keyring.o 00:02:08.051 CC lib/notify/notify.o 00:02:08.051 CC lib/trace/trace.o 00:02:08.051 CC lib/keyring/keyring_rpc.o 00:02:08.051 CC lib/trace/trace_flags.o 00:02:08.051 CC lib/notify/notify_rpc.o 00:02:08.051 CC lib/trace/trace_rpc.o 00:02:08.309 LIB libspdk_notify.a 00:02:08.309 SO libspdk_notify.so.6.0 00:02:08.309 SYMLINK libspdk_notify.so 00:02:08.309 LIB libspdk_keyring.a 00:02:08.568 LIB libspdk_trace.a 00:02:08.568 SO libspdk_keyring.so.1.0 00:02:08.568 SO libspdk_trace.so.10.0 00:02:08.568 SYMLINK libspdk_keyring.so 00:02:08.568 SYMLINK libspdk_trace.so 00:02:08.826 CC lib/sock/sock.o 00:02:08.826 CC lib/thread/thread.o 00:02:08.826 CC lib/sock/sock_rpc.o 00:02:08.826 CC lib/thread/iobuf.o 00:02:09.084 LIB libspdk_sock.a 00:02:09.084 SO libspdk_sock.so.10.0 00:02:09.343 SYMLINK libspdk_sock.so 00:02:09.343 LIB libspdk_env_dpdk.a 00:02:09.343 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:09.343 CC lib/nvme/nvme_ctrlr.o 00:02:09.343 CC lib/nvme/nvme_fabric.o 00:02:09.343 CC lib/nvme/nvme_ns_cmd.o 00:02:09.343 CC lib/nvme/nvme_ns.o 00:02:09.343 CC lib/nvme/nvme_pcie_common.o 00:02:09.343 CC lib/nvme/nvme_pcie.o 00:02:09.343 CC lib/nvme/nvme_qpair.o 00:02:09.343 CC lib/nvme/nvme.o 00:02:09.343 CC lib/nvme/nvme_quirks.o 00:02:09.343 CC lib/nvme/nvme_transport.o 00:02:09.343 CC lib/nvme/nvme_discovery.o 00:02:09.343 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:09.343 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:09.343 CC lib/nvme/nvme_tcp.o 00:02:09.343 CC lib/nvme/nvme_opal.o 00:02:09.343 CC lib/nvme/nvme_io_msg.o 00:02:09.343 CC lib/nvme/nvme_poll_group.o 00:02:09.343 SO 
libspdk_env_dpdk.so.15.0 00:02:09.343 CC lib/nvme/nvme_zns.o 00:02:09.343 CC lib/nvme/nvme_stubs.o 00:02:09.343 CC lib/nvme/nvme_cuse.o 00:02:09.343 CC lib/nvme/nvme_auth.o 00:02:09.343 CC lib/nvme/nvme_rdma.o 00:02:09.602 SYMLINK libspdk_env_dpdk.so 00:02:10.979 LIB libspdk_thread.a 00:02:10.979 SO libspdk_thread.so.10.1 00:02:10.979 SYMLINK libspdk_thread.so 00:02:10.979 CC lib/init/json_config.o 00:02:10.979 CC lib/virtio/virtio.o 00:02:10.979 CC lib/accel/accel.o 00:02:10.979 CC lib/blob/blobstore.o 00:02:10.979 CC lib/init/subsystem.o 00:02:10.979 CC lib/accel/accel_rpc.o 00:02:10.979 CC lib/virtio/virtio_vhost_user.o 00:02:10.979 CC lib/blob/request.o 00:02:10.979 CC lib/init/subsystem_rpc.o 00:02:10.979 CC lib/accel/accel_sw.o 00:02:10.979 CC lib/virtio/virtio_vfio_user.o 00:02:10.979 CC lib/init/rpc.o 00:02:10.979 CC lib/blob/zeroes.o 00:02:10.979 CC lib/virtio/virtio_pci.o 00:02:10.979 CC lib/blob/blob_bs_dev.o 00:02:11.237 LIB libspdk_init.a 00:02:11.237 SO libspdk_init.so.5.0 00:02:11.495 SYMLINK libspdk_init.so 00:02:11.495 LIB libspdk_virtio.a 00:02:11.495 SO libspdk_virtio.so.7.0 00:02:11.495 SYMLINK libspdk_virtio.so 00:02:11.495 CC lib/event/app.o 00:02:11.495 CC lib/event/reactor.o 00:02:11.495 CC lib/event/log_rpc.o 00:02:11.495 CC lib/event/app_rpc.o 00:02:11.495 CC lib/event/scheduler_static.o 00:02:12.132 LIB libspdk_event.a 00:02:12.132 SO libspdk_event.so.14.0 00:02:12.132 SYMLINK libspdk_event.so 00:02:12.390 LIB libspdk_accel.a 00:02:12.390 LIB libspdk_nvme.a 00:02:12.390 SO libspdk_accel.so.16.0 00:02:12.390 SYMLINK libspdk_accel.so 00:02:12.390 SO libspdk_nvme.so.13.1 00:02:12.647 CC lib/bdev/bdev.o 00:02:12.647 CC lib/bdev/bdev_rpc.o 00:02:12.647 CC lib/bdev/bdev_zone.o 00:02:12.647 CC lib/bdev/part.o 00:02:12.647 CC lib/bdev/scsi_nvme.o 00:02:12.647 SYMLINK libspdk_nvme.so 00:02:15.178 LIB libspdk_blob.a 00:02:15.178 SO libspdk_blob.so.11.0 00:02:15.178 SYMLINK libspdk_blob.so 00:02:15.436 CC lib/blobfs/blobfs.o 00:02:15.436 CC 
lib/blobfs/tree.o 00:02:15.436 CC lib/lvol/lvol.o 00:02:16.003 LIB libspdk_bdev.a 00:02:16.003 SO libspdk_bdev.so.16.0 00:02:16.003 SYMLINK libspdk_bdev.so 00:02:16.269 CC lib/ublk/ublk.o 00:02:16.269 CC lib/nbd/nbd.o 00:02:16.269 CC lib/scsi/dev.o 00:02:16.269 CC lib/ublk/ublk_rpc.o 00:02:16.269 CC lib/scsi/lun.o 00:02:16.269 CC lib/nbd/nbd_rpc.o 00:02:16.269 CC lib/scsi/port.o 00:02:16.269 CC lib/scsi/scsi.o 00:02:16.269 CC lib/nvmf/ctrlr.o 00:02:16.269 CC lib/scsi/scsi_bdev.o 00:02:16.269 CC lib/scsi/scsi_pr.o 00:02:16.269 CC lib/nvmf/ctrlr_discovery.o 00:02:16.269 CC lib/scsi/scsi_rpc.o 00:02:16.269 CC lib/nvmf/ctrlr_bdev.o 00:02:16.269 CC lib/scsi/task.o 00:02:16.269 CC lib/nvmf/subsystem.o 00:02:16.269 CC lib/ftl/ftl_core.o 00:02:16.269 CC lib/nvmf/nvmf.o 00:02:16.269 CC lib/ftl/ftl_init.o 00:02:16.269 CC lib/ftl/ftl_layout.o 00:02:16.269 CC lib/nvmf/nvmf_rpc.o 00:02:16.269 CC lib/ftl/ftl_debug.o 00:02:16.269 CC lib/nvmf/transport.o 00:02:16.269 CC lib/ftl/ftl_sb.o 00:02:16.269 CC lib/ftl/ftl_io.o 00:02:16.269 CC lib/nvmf/tcp.o 00:02:16.269 CC lib/nvmf/stubs.o 00:02:16.269 CC lib/ftl/ftl_l2p.o 00:02:16.269 CC lib/nvmf/rdma.o 00:02:16.269 CC lib/nvmf/mdns_server.o 00:02:16.269 CC lib/ftl/ftl_l2p_flat.o 00:02:16.269 CC lib/ftl/ftl_nv_cache.o 00:02:16.269 CC lib/nvmf/auth.o 00:02:16.269 CC lib/ftl/ftl_band.o 00:02:16.269 CC lib/ftl/ftl_band_ops.o 00:02:16.269 CC lib/ftl/ftl_writer.o 00:02:16.269 CC lib/ftl/ftl_reloc.o 00:02:16.269 CC lib/ftl/ftl_rq.o 00:02:16.269 CC lib/ftl/ftl_l2p_cache.o 00:02:16.269 CC lib/ftl/ftl_p2l.o 00:02:16.269 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.269 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.269 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.269 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.269 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.269 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.534 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.534 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.534 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.534 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.534 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.534 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.534 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.534 CC lib/ftl/utils/ftl_conf.o 00:02:16.534 CC lib/ftl/utils/ftl_md.o 00:02:16.534 CC lib/ftl/utils/ftl_mempool.o 00:02:16.534 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.534 CC lib/ftl/utils/ftl_property.o 00:02:16.534 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.793 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.793 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.793 LIB libspdk_blobfs.a 00:02:16.793 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.793 SO libspdk_blobfs.so.10.0 00:02:16.793 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.793 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.793 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:16.793 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.793 SYMLINK libspdk_blobfs.so 00:02:16.793 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.793 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.793 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.793 CC lib/ftl/base/ftl_base_dev.o 00:02:16.793 CC lib/ftl/base/ftl_base_bdev.o 00:02:16.793 CC lib/ftl/ftl_trace.o 00:02:17.052 LIB libspdk_lvol.a 00:02:17.052 SO libspdk_lvol.so.10.0 00:02:17.052 LIB libspdk_nbd.a 00:02:17.052 SO libspdk_nbd.so.7.0 00:02:17.052 SYMLINK libspdk_lvol.so 00:02:17.310 SYMLINK libspdk_nbd.so 00:02:17.310 LIB libspdk_scsi.a 00:02:17.310 SO libspdk_scsi.so.9.0 00:02:17.569 SYMLINK libspdk_scsi.so 00:02:17.569 LIB libspdk_ublk.a 00:02:17.569 SO libspdk_ublk.so.3.0 00:02:17.569 SYMLINK libspdk_ublk.so 00:02:17.569 CC lib/iscsi/conn.o 00:02:17.569 CC lib/vhost/vhost.o 00:02:17.569 CC lib/vhost/vhost_rpc.o 00:02:17.569 CC lib/iscsi/init_grp.o 00:02:17.569 CC lib/vhost/vhost_scsi.o 00:02:17.569 CC lib/iscsi/iscsi.o 00:02:17.569 CC lib/iscsi/md5.o 00:02:17.569 CC lib/vhost/vhost_blk.o 00:02:17.569 CC lib/vhost/rte_vhost_user.o 00:02:17.569 CC lib/iscsi/param.o 00:02:17.569 CC lib/iscsi/portal_grp.o 00:02:17.569 CC 
lib/iscsi/tgt_node.o 00:02:17.569 CC lib/iscsi/iscsi_subsystem.o 00:02:17.569 CC lib/iscsi/iscsi_rpc.o 00:02:17.569 CC lib/iscsi/task.o 00:02:18.136 LIB libspdk_ftl.a 00:02:18.136 SO libspdk_ftl.so.9.0 00:02:18.702 SYMLINK libspdk_ftl.so 00:02:18.960 LIB libspdk_vhost.a 00:02:18.960 SO libspdk_vhost.so.8.0 00:02:19.218 SYMLINK libspdk_vhost.so 00:02:19.474 LIB libspdk_iscsi.a 00:02:19.474 SO libspdk_iscsi.so.8.0 00:02:19.730 LIB libspdk_nvmf.a 00:02:19.730 SO libspdk_nvmf.so.19.0 00:02:19.730 SYMLINK libspdk_iscsi.so 00:02:19.986 SYMLINK libspdk_nvmf.so 00:02:20.243 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.243 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.243 CC module/blob/bdev/blob_bdev.o 00:02:20.243 CC module/sock/posix/posix.o 00:02:20.243 CC module/accel/dsa/accel_dsa.o 00:02:20.243 CC module/accel/iaa/accel_iaa.o 00:02:20.243 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.243 CC module/accel/error/accel_error.o 00:02:20.243 CC module/keyring/file/keyring.o 00:02:20.243 CC module/keyring/linux/keyring.o 00:02:20.243 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.243 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.243 CC module/keyring/file/keyring_rpc.o 00:02:20.243 CC module/keyring/linux/keyring_rpc.o 00:02:20.243 CC module/accel/error/accel_error_rpc.o 00:02:20.243 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.243 CC module/accel/ioat/accel_ioat.o 00:02:20.243 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.243 LIB libspdk_env_dpdk_rpc.a 00:02:20.500 SO libspdk_env_dpdk_rpc.so.6.0 00:02:20.500 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.500 LIB libspdk_keyring_linux.a 00:02:20.500 LIB libspdk_keyring_file.a 00:02:20.500 LIB libspdk_scheduler_gscheduler.a 00:02:20.500 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.500 SO libspdk_keyring_linux.so.1.0 00:02:20.500 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.500 SO libspdk_keyring_file.so.1.0 00:02:20.500 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.500 LIB 
libspdk_accel_error.a 00:02:20.500 LIB libspdk_accel_ioat.a 00:02:20.500 LIB libspdk_scheduler_dynamic.a 00:02:20.500 LIB libspdk_accel_iaa.a 00:02:20.500 SO libspdk_accel_error.so.2.0 00:02:20.500 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.500 SO libspdk_scheduler_dynamic.so.4.0 00:02:20.500 SO libspdk_accel_ioat.so.6.0 00:02:20.500 SYMLINK libspdk_keyring_linux.so 00:02:20.500 SYMLINK libspdk_keyring_file.so 00:02:20.500 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.500 SO libspdk_accel_iaa.so.3.0 00:02:20.500 SYMLINK libspdk_accel_error.so 00:02:20.500 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.500 SYMLINK libspdk_accel_ioat.so 00:02:20.500 LIB libspdk_accel_dsa.a 00:02:20.500 LIB libspdk_blob_bdev.a 00:02:20.500 SYMLINK libspdk_accel_iaa.so 00:02:20.758 SO libspdk_blob_bdev.so.11.0 00:02:20.758 SO libspdk_accel_dsa.so.5.0 00:02:20.758 SYMLINK libspdk_blob_bdev.so 00:02:20.758 SYMLINK libspdk_accel_dsa.so 00:02:21.016 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.016 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.016 CC module/bdev/error/vbdev_error.o 00:02:21.016 CC module/bdev/gpt/gpt.o 00:02:21.016 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.016 CC module/bdev/delay/vbdev_delay.o 00:02:21.016 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.016 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.016 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.016 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.016 CC module/bdev/aio/bdev_aio.o 00:02:21.016 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.016 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.016 CC module/bdev/null/bdev_null.o 00:02:21.016 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.016 CC module/bdev/malloc/bdev_malloc.o 00:02:21.016 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.016 CC module/bdev/ftl/bdev_ftl.o 00:02:21.016 CC module/bdev/null/bdev_null_rpc.o 00:02:21.016 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.016 CC module/bdev/raid/bdev_raid.o 00:02:21.016 CC module/bdev/nvme/bdev_nvme.o 
00:02:21.016 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.016 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.016 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.016 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.016 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.016 CC module/bdev/split/vbdev_split.o 00:02:21.016 CC module/bdev/nvme/nvme_rpc.o 00:02:21.016 CC module/bdev/raid/raid0.o 00:02:21.016 CC module/bdev/raid/raid1.o 00:02:21.016 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.016 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.016 CC module/bdev/nvme/vbdev_opal.o 00:02:21.016 CC module/bdev/raid/concat.o 00:02:21.016 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.016 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.016 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.016 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.016 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.016 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.016 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.275 LIB libspdk_sock_posix.a 00:02:21.275 LIB libspdk_blobfs_bdev.a 00:02:21.275 LIB libspdk_bdev_null.a 00:02:21.275 SO libspdk_sock_posix.so.6.0 00:02:21.275 SO libspdk_blobfs_bdev.so.6.0 00:02:21.275 SO libspdk_bdev_null.so.6.0 00:02:21.533 SYMLINK libspdk_bdev_null.so 00:02:21.533 LIB libspdk_bdev_split.a 00:02:21.533 LIB libspdk_bdev_gpt.a 00:02:21.533 SYMLINK libspdk_blobfs_bdev.so 00:02:21.533 SO libspdk_bdev_split.so.6.0 00:02:21.533 SYMLINK libspdk_sock_posix.so 00:02:21.533 SO libspdk_bdev_gpt.so.6.0 00:02:21.533 LIB libspdk_bdev_error.a 00:02:21.533 SYMLINK libspdk_bdev_split.so 00:02:21.533 SO libspdk_bdev_error.so.6.0 00:02:21.533 LIB libspdk_bdev_passthru.a 00:02:21.533 SYMLINK libspdk_bdev_gpt.so 00:02:21.533 LIB libspdk_bdev_ftl.a 00:02:21.533 SO libspdk_bdev_passthru.so.6.0 00:02:21.533 LIB libspdk_bdev_zone_block.a 00:02:21.533 LIB libspdk_bdev_delay.a 00:02:21.533 LIB libspdk_bdev_aio.a 00:02:21.533 SO libspdk_bdev_ftl.so.6.0 00:02:21.533 SYMLINK 
libspdk_bdev_error.so 00:02:21.533 LIB libspdk_bdev_malloc.a 00:02:21.533 LIB libspdk_bdev_iscsi.a 00:02:21.533 SO libspdk_bdev_zone_block.so.6.0 00:02:21.533 SO libspdk_bdev_aio.so.6.0 00:02:21.533 SO libspdk_bdev_delay.so.6.0 00:02:21.533 SO libspdk_bdev_malloc.so.6.0 00:02:21.533 SO libspdk_bdev_iscsi.so.6.0 00:02:21.533 SYMLINK libspdk_bdev_passthru.so 00:02:21.533 SYMLINK libspdk_bdev_ftl.so 00:02:21.791 LIB libspdk_bdev_virtio.a 00:02:21.791 SYMLINK libspdk_bdev_zone_block.so 00:02:21.791 SYMLINK libspdk_bdev_aio.so 00:02:21.791 SYMLINK libspdk_bdev_delay.so 00:02:21.791 SYMLINK libspdk_bdev_iscsi.so 00:02:21.791 SYMLINK libspdk_bdev_malloc.so 00:02:21.791 SO libspdk_bdev_virtio.so.6.0 00:02:21.791 SYMLINK libspdk_bdev_virtio.so 00:02:21.791 LIB libspdk_bdev_lvol.a 00:02:21.791 SO libspdk_bdev_lvol.so.6.0 00:02:22.049 SYMLINK libspdk_bdev_lvol.so 00:02:22.613 LIB libspdk_bdev_raid.a 00:02:22.613 SO libspdk_bdev_raid.so.6.0 00:02:22.613 SYMLINK libspdk_bdev_raid.so 00:02:23.987 LIB libspdk_bdev_nvme.a 00:02:23.987 SO libspdk_bdev_nvme.so.7.0 00:02:24.245 SYMLINK libspdk_bdev_nvme.so 00:02:24.502 CC module/event/subsystems/vmd/vmd.o 00:02:24.502 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.502 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.502 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.502 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.502 CC module/event/subsystems/keyring/keyring.o 00:02:24.502 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.502 CC module/event/subsystems/sock/sock.o 00:02:24.502 LIB libspdk_event_keyring.a 00:02:24.502 LIB libspdk_event_vhost_blk.a 00:02:24.502 LIB libspdk_event_scheduler.a 00:02:24.502 LIB libspdk_event_vmd.a 00:02:24.502 LIB libspdk_event_sock.a 00:02:24.760 LIB libspdk_event_iobuf.a 00:02:24.760 SO libspdk_event_keyring.so.1.0 00:02:24.760 SO libspdk_event_vhost_blk.so.3.0 00:02:24.760 SO libspdk_event_scheduler.so.4.0 00:02:24.760 SO libspdk_event_sock.so.5.0 00:02:24.760 SO 
libspdk_event_vmd.so.6.0 00:02:24.760 SO libspdk_event_iobuf.so.3.0 00:02:24.760 SYMLINK libspdk_event_keyring.so 00:02:24.760 SYMLINK libspdk_event_vhost_blk.so 00:02:24.760 SYMLINK libspdk_event_scheduler.so 00:02:24.760 SYMLINK libspdk_event_sock.so 00:02:24.760 SYMLINK libspdk_event_vmd.so 00:02:24.760 SYMLINK libspdk_event_iobuf.so 00:02:25.018 CC module/event/subsystems/accel/accel.o 00:02:25.018 LIB libspdk_event_accel.a 00:02:25.018 SO libspdk_event_accel.so.6.0 00:02:25.018 SYMLINK libspdk_event_accel.so 00:02:25.276 CC module/event/subsystems/bdev/bdev.o 00:02:25.534 LIB libspdk_event_bdev.a 00:02:25.534 SO libspdk_event_bdev.so.6.0 00:02:25.534 SYMLINK libspdk_event_bdev.so 00:02:25.792 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.792 CC module/event/subsystems/ublk/ublk.o 00:02:25.792 CC module/event/subsystems/nbd/nbd.o 00:02:25.792 CC module/event/subsystems/scsi/scsi.o 00:02:25.792 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.792 LIB libspdk_event_ublk.a 00:02:25.792 LIB libspdk_event_nbd.a 00:02:25.792 LIB libspdk_event_scsi.a 00:02:25.792 SO libspdk_event_nbd.so.6.0 00:02:25.792 SO libspdk_event_ublk.so.3.0 00:02:25.792 SO libspdk_event_scsi.so.6.0 00:02:26.051 SYMLINK libspdk_event_nbd.so 00:02:26.051 SYMLINK libspdk_event_ublk.so 00:02:26.051 SYMLINK libspdk_event_scsi.so 00:02:26.051 LIB libspdk_event_nvmf.a 00:02:26.051 SO libspdk_event_nvmf.so.6.0 00:02:26.051 SYMLINK libspdk_event_nvmf.so 00:02:26.051 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.051 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.309 LIB libspdk_event_vhost_scsi.a 00:02:26.309 LIB libspdk_event_iscsi.a 00:02:26.309 SO libspdk_event_vhost_scsi.so.3.0 00:02:26.309 SO libspdk_event_iscsi.so.6.0 00:02:26.309 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.309 SYMLINK libspdk_event_iscsi.so 00:02:26.567 SO libspdk.so.6.0 00:02:26.567 SYMLINK libspdk.so 00:02:26.567 TEST_HEADER include/spdk/accel.h 00:02:26.567 TEST_HEADER include/spdk/accel_module.h 
00:02:26.567 CC app/trace_record/trace_record.o 00:02:26.567 CXX app/trace/trace.o 00:02:26.567 TEST_HEADER include/spdk/assert.h 00:02:26.567 TEST_HEADER include/spdk/barrier.h 00:02:26.567 TEST_HEADER include/spdk/base64.h 00:02:26.567 TEST_HEADER include/spdk/bdev.h 00:02:26.567 TEST_HEADER include/spdk/bdev_module.h 00:02:26.567 CC test/rpc_client/rpc_client_test.o 00:02:26.567 TEST_HEADER include/spdk/bdev_zone.h 00:02:26.567 TEST_HEADER include/spdk/bit_array.h 00:02:26.567 CC app/spdk_nvme_perf/perf.o 00:02:26.567 CC app/spdk_lspci/spdk_lspci.o 00:02:26.567 TEST_HEADER include/spdk/bit_pool.h 00:02:26.567 TEST_HEADER include/spdk/blob_bdev.h 00:02:26.567 CC app/spdk_top/spdk_top.o 00:02:26.567 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:26.567 CC app/spdk_nvme_identify/identify.o 00:02:26.567 TEST_HEADER include/spdk/blobfs.h 00:02:26.567 CC app/spdk_nvme_discover/discovery_aer.o 00:02:26.567 TEST_HEADER include/spdk/blob.h 00:02:26.567 TEST_HEADER include/spdk/conf.h 00:02:26.567 TEST_HEADER include/spdk/config.h 00:02:26.567 TEST_HEADER include/spdk/cpuset.h 00:02:26.567 TEST_HEADER include/spdk/crc32.h 00:02:26.567 TEST_HEADER include/spdk/crc16.h 00:02:26.567 TEST_HEADER include/spdk/crc64.h 00:02:26.567 TEST_HEADER include/spdk/dif.h 00:02:26.567 TEST_HEADER include/spdk/dma.h 00:02:26.567 TEST_HEADER include/spdk/endian.h 00:02:26.567 TEST_HEADER include/spdk/env_dpdk.h 00:02:26.567 TEST_HEADER include/spdk/env.h 00:02:26.567 TEST_HEADER include/spdk/event.h 00:02:26.567 TEST_HEADER include/spdk/fd_group.h 00:02:26.567 TEST_HEADER include/spdk/fd.h 00:02:26.567 TEST_HEADER include/spdk/file.h 00:02:26.567 TEST_HEADER include/spdk/ftl.h 00:02:26.567 TEST_HEADER include/spdk/gpt_spec.h 00:02:26.567 TEST_HEADER include/spdk/hexlify.h 00:02:26.567 TEST_HEADER include/spdk/histogram_data.h 00:02:26.567 TEST_HEADER include/spdk/idxd.h 00:02:26.567 TEST_HEADER include/spdk/idxd_spec.h 00:02:26.567 TEST_HEADER include/spdk/init.h 00:02:26.567 TEST_HEADER 
include/spdk/ioat.h 00:02:26.567 TEST_HEADER include/spdk/ioat_spec.h 00:02:26.567 TEST_HEADER include/spdk/iscsi_spec.h 00:02:26.567 TEST_HEADER include/spdk/json.h 00:02:26.567 TEST_HEADER include/spdk/jsonrpc.h 00:02:26.567 TEST_HEADER include/spdk/keyring.h 00:02:26.567 TEST_HEADER include/spdk/likely.h 00:02:26.567 TEST_HEADER include/spdk/keyring_module.h 00:02:26.567 TEST_HEADER include/spdk/lvol.h 00:02:26.567 TEST_HEADER include/spdk/log.h 00:02:26.567 TEST_HEADER include/spdk/memory.h 00:02:26.567 TEST_HEADER include/spdk/mmio.h 00:02:26.567 TEST_HEADER include/spdk/nbd.h 00:02:26.567 TEST_HEADER include/spdk/net.h 00:02:26.567 TEST_HEADER include/spdk/notify.h 00:02:26.567 TEST_HEADER include/spdk/nvme_intel.h 00:02:26.567 TEST_HEADER include/spdk/nvme.h 00:02:26.567 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:26.567 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:26.567 TEST_HEADER include/spdk/nvme_spec.h 00:02:26.567 TEST_HEADER include/spdk/nvme_zns.h 00:02:26.567 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:26.567 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:26.567 TEST_HEADER include/spdk/nvmf.h 00:02:26.567 TEST_HEADER include/spdk/nvmf_spec.h 00:02:26.567 TEST_HEADER include/spdk/nvmf_transport.h 00:02:26.567 TEST_HEADER include/spdk/opal.h 00:02:26.567 TEST_HEADER include/spdk/opal_spec.h 00:02:26.567 TEST_HEADER include/spdk/pipe.h 00:02:26.567 TEST_HEADER include/spdk/pci_ids.h 00:02:26.567 TEST_HEADER include/spdk/queue.h 00:02:26.567 TEST_HEADER include/spdk/reduce.h 00:02:26.567 TEST_HEADER include/spdk/rpc.h 00:02:26.567 TEST_HEADER include/spdk/scheduler.h 00:02:26.567 TEST_HEADER include/spdk/scsi.h 00:02:26.567 TEST_HEADER include/spdk/scsi_spec.h 00:02:26.567 TEST_HEADER include/spdk/sock.h 00:02:26.567 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:26.567 TEST_HEADER include/spdk/stdinc.h 00:02:26.567 TEST_HEADER include/spdk/string.h 00:02:26.567 TEST_HEADER include/spdk/thread.h 00:02:26.567 TEST_HEADER include/spdk/trace.h 
00:02:26.567 TEST_HEADER include/spdk/trace_parser.h
00:02:26.567 TEST_HEADER include/spdk/tree.h
00:02:26.567 TEST_HEADER include/spdk/ublk.h
00:02:26.567 TEST_HEADER include/spdk/util.h
00:02:26.567 TEST_HEADER include/spdk/uuid.h
00:02:26.567 TEST_HEADER include/spdk/version.h
00:02:26.567 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:26.567 TEST_HEADER include/spdk/vhost.h
00:02:26.567 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:26.567 TEST_HEADER include/spdk/vmd.h
00:02:26.567 TEST_HEADER include/spdk/xor.h
00:02:26.567 TEST_HEADER include/spdk/zipf.h
00:02:26.830 CXX test/cpp_headers/accel.o
00:02:26.831 CXX test/cpp_headers/accel_module.o
00:02:26.831 CXX test/cpp_headers/assert.o
00:02:26.831 CXX test/cpp_headers/barrier.o
00:02:26.831 CXX test/cpp_headers/base64.o
00:02:26.831 CXX test/cpp_headers/bdev.o
00:02:26.831 CXX test/cpp_headers/bdev_module.o
00:02:26.831 CXX test/cpp_headers/bdev_zone.o
00:02:26.831 CXX test/cpp_headers/bit_array.o
00:02:26.831 CXX test/cpp_headers/bit_pool.o
00:02:26.831 CXX test/cpp_headers/blob_bdev.o
00:02:26.831 CXX test/cpp_headers/blobfs_bdev.o
00:02:26.831 CXX test/cpp_headers/blobfs.o
00:02:26.831 CXX test/cpp_headers/blob.o
00:02:26.831 CXX test/cpp_headers/conf.o
00:02:26.831 CC app/nvmf_tgt/nvmf_main.o
00:02:26.831 CC app/iscsi_tgt/iscsi_tgt.o
00:02:26.831 CXX test/cpp_headers/config.o
00:02:26.831 CXX test/cpp_headers/cpuset.o
00:02:26.831 CXX test/cpp_headers/crc16.o
00:02:26.831 CC app/spdk_dd/spdk_dd.o
00:02:26.831 CXX test/cpp_headers/crc32.o
00:02:26.831 CC app/spdk_tgt/spdk_tgt.o
00:02:26.831 CC examples/ioat/perf/perf.o
00:02:26.831 CC test/app/jsoncat/jsoncat.o
00:02:26.831 CC test/env/vtophys/vtophys.o
00:02:26.831 CC examples/ioat/verify/verify.o
00:02:26.831 CC examples/util/zipf/zipf.o
00:02:26.831 CC test/app/stub/stub.o
00:02:26.831 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:26.831 CC test/env/pci/pci_ut.o
00:02:26.831 CC test/env/memory/memory_ut.o
00:02:26.831 CC app/fio/nvme/fio_plugin.o
00:02:26.831 CC test/thread/poller_perf/poller_perf.o
00:02:26.831 CC test/app/histogram_perf/histogram_perf.o
00:02:26.831 CC test/dma/test_dma/test_dma.o
00:02:26.831 CC test/app/bdev_svc/bdev_svc.o
00:02:26.831 CC app/fio/bdev/fio_plugin.o
00:02:26.831 LINK spdk_lspci
00:02:26.831 CC test/env/mem_callbacks/mem_callbacks.o
00:02:27.098 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:27.098 LINK rpc_client_test
00:02:27.098 LINK spdk_nvme_discover
00:02:27.098 LINK jsoncat
00:02:27.098 LINK interrupt_tgt
00:02:27.098 LINK vtophys
00:02:27.098 LINK nvmf_tgt
00:02:27.098 LINK histogram_perf
00:02:27.098 LINK zipf
00:02:27.098 LINK poller_perf
00:02:27.098 CXX test/cpp_headers/crc64.o
00:02:27.098 CXX test/cpp_headers/dif.o
00:02:27.098 CXX test/cpp_headers/dma.o
00:02:27.098 LINK env_dpdk_post_init
00:02:27.098 CXX test/cpp_headers/endian.o
00:02:27.098 LINK iscsi_tgt
00:02:27.098 CXX test/cpp_headers/env_dpdk.o
00:02:27.098 CXX test/cpp_headers/env.o
00:02:27.098 CXX test/cpp_headers/event.o
00:02:27.098 CXX test/cpp_headers/fd_group.o
00:02:27.098 CXX test/cpp_headers/fd.o
00:02:27.098 CXX test/cpp_headers/file.o
00:02:27.098 LINK stub
00:02:27.098 CXX test/cpp_headers/ftl.o
00:02:27.364 CXX test/cpp_headers/hexlify.o
00:02:27.364 CXX test/cpp_headers/histogram_data.o
00:02:27.364 CXX test/cpp_headers/gpt_spec.o
00:02:27.364 CXX test/cpp_headers/idxd.o
00:02:27.364 LINK spdk_tgt
00:02:27.364 CXX test/cpp_headers/idxd_spec.o
00:02:27.364 LINK bdev_svc
00:02:27.364 LINK spdk_trace_record
00:02:27.364 CXX test/cpp_headers/init.o
00:02:27.364 LINK ioat_perf
00:02:27.364 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:27.364 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:27.364 LINK verify
00:02:27.364 CXX test/cpp_headers/ioat.o
00:02:27.364 CXX test/cpp_headers/ioat_spec.o
00:02:27.364 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:27.364 CXX test/cpp_headers/iscsi_spec.o
00:02:27.364 CXX test/cpp_headers/json.o
00:02:27.626 CXX test/cpp_headers/jsonrpc.o
00:02:27.626 CXX test/cpp_headers/keyring.o
00:02:27.626 CXX test/cpp_headers/keyring_module.o
00:02:27.626 CXX test/cpp_headers/likely.o
00:02:27.626 CXX test/cpp_headers/log.o
00:02:27.626 CXX test/cpp_headers/lvol.o
00:02:27.626 CXX test/cpp_headers/memory.o
00:02:27.626 CXX test/cpp_headers/mmio.o
00:02:27.626 LINK spdk_dd
00:02:27.626 CXX test/cpp_headers/nbd.o
00:02:27.626 CXX test/cpp_headers/net.o
00:02:27.626 LINK spdk_trace
00:02:27.626 CXX test/cpp_headers/notify.o
00:02:27.626 CXX test/cpp_headers/nvme.o
00:02:27.626 CXX test/cpp_headers/nvme_intel.o
00:02:27.626 CXX test/cpp_headers/nvme_ocssd.o
00:02:27.626 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:27.626 CXX test/cpp_headers/nvme_spec.o
00:02:27.626 CXX test/cpp_headers/nvme_zns.o
00:02:27.626 CXX test/cpp_headers/nvmf_cmd.o
00:02:27.626 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:27.626 CXX test/cpp_headers/nvmf.o
00:02:27.626 CXX test/cpp_headers/nvmf_spec.o
00:02:27.626 LINK test_dma
00:02:27.626 CXX test/cpp_headers/nvmf_transport.o
00:02:27.890 LINK pci_ut
00:02:27.890 CXX test/cpp_headers/opal.o
00:02:27.890 CC examples/sock/hello_world/hello_sock.o
00:02:27.890 CC examples/vmd/led/led.o
00:02:27.890 CC examples/vmd/lsvmd/lsvmd.o
00:02:27.890 CXX test/cpp_headers/opal_spec.o
00:02:27.890 CC test/event/event_perf/event_perf.o
00:02:27.890 CC test/event/reactor/reactor.o
00:02:27.890 CC test/event/reactor_perf/reactor_perf.o
00:02:27.890 CXX test/cpp_headers/pci_ids.o
00:02:27.890 CC examples/idxd/perf/perf.o
00:02:27.890 CC examples/thread/thread/thread_ex.o
00:02:27.890 CC test/event/app_repeat/app_repeat.o
00:02:27.890 CXX test/cpp_headers/pipe.o
00:02:27.890 CXX test/cpp_headers/queue.o
00:02:27.890 CXX test/cpp_headers/reduce.o
00:02:27.890 LINK nvme_fuzz
00:02:27.890 CXX test/cpp_headers/rpc.o
00:02:27.890 CXX test/cpp_headers/scheduler.o
00:02:27.890 CXX test/cpp_headers/scsi.o
00:02:27.890 CXX test/cpp_headers/scsi_spec.o
00:02:27.890 CXX test/cpp_headers/sock.o
00:02:27.890 CXX test/cpp_headers/stdinc.o
00:02:28.155 CXX test/cpp_headers/string.o
00:02:28.155 CC test/event/scheduler/scheduler.o
00:02:28.155 CXX test/cpp_headers/thread.o
00:02:28.155 CXX test/cpp_headers/trace.o
00:02:28.155 CXX test/cpp_headers/trace_parser.o
00:02:28.155 CXX test/cpp_headers/tree.o
00:02:28.155 CXX test/cpp_headers/ublk.o
00:02:28.155 LINK spdk_bdev
00:02:28.155 CXX test/cpp_headers/util.o
00:02:28.155 CXX test/cpp_headers/uuid.o
00:02:28.155 CXX test/cpp_headers/version.o
00:02:28.155 LINK lsvmd
00:02:28.155 CXX test/cpp_headers/vfio_user_pci.o
00:02:28.155 CXX test/cpp_headers/vfio_user_spec.o
00:02:28.155 LINK reactor
00:02:28.155 CXX test/cpp_headers/vhost.o
00:02:28.155 LINK led
00:02:28.155 LINK reactor_perf
00:02:28.155 LINK spdk_nvme
00:02:28.155 LINK mem_callbacks
00:02:28.155 LINK event_perf
00:02:28.155 CXX test/cpp_headers/vmd.o
00:02:28.155 CC app/vhost/vhost.o
00:02:28.155 CXX test/cpp_headers/xor.o
00:02:28.155 CXX test/cpp_headers/zipf.o
00:02:28.155 LINK app_repeat
00:02:28.414 LINK hello_sock
00:02:28.414 LINK thread
00:02:28.414 CC test/accel/dif/dif.o
00:02:28.414 LINK vhost_fuzz
00:02:28.414 CC test/nvme/aer/aer.o
00:02:28.414 CC test/nvme/e2edp/nvme_dp.o
00:02:28.414 CC test/nvme/overhead/overhead.o
00:02:28.414 CC test/nvme/reset/reset.o
00:02:28.414 CC test/nvme/sgl/sgl.o
00:02:28.414 CC test/blobfs/mkfs/mkfs.o
00:02:28.414 CC test/nvme/startup/startup.o
00:02:28.414 CC test/nvme/err_injection/err_injection.o
00:02:28.414 CC test/nvme/reserve/reserve.o
00:02:28.414 CC test/nvme/simple_copy/simple_copy.o
00:02:28.414 CC test/nvme/connect_stress/connect_stress.o
00:02:28.414 CC test/nvme/boot_partition/boot_partition.o
00:02:28.414 LINK scheduler
00:02:28.674 CC test/nvme/compliance/nvme_compliance.o
00:02:28.674 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:28.674 CC test/nvme/fdp/fdp.o
00:02:28.674 CC test/nvme/fused_ordering/fused_ordering.o
00:02:28.674 CC test/nvme/cuse/cuse.o
00:02:28.674 CC test/lvol/esnap/esnap.o
00:02:28.674 LINK vhost
00:02:28.674 LINK idxd_perf
00:02:28.674 LINK spdk_top
00:02:28.674 LINK startup
00:02:28.674 LINK spdk_nvme_identify
00:02:28.674 LINK spdk_nvme_perf
00:02:28.674 LINK boot_partition
00:02:28.674 LINK err_injection
00:02:28.674 LINK mkfs
00:02:28.932 CC examples/nvme/reconnect/reconnect.o
00:02:28.932 CC examples/nvme/hello_world/hello_world.o
00:02:28.932 CC examples/nvme/abort/abort.o
00:02:28.932 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:28.932 CC examples/nvme/arbitration/arbitration.o
00:02:28.932 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:28.932 CC examples/nvme/hotplug/hotplug.o
00:02:28.932 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:28.932 LINK fused_ordering
00:02:28.932 LINK reserve
00:02:28.932 LINK connect_stress
00:02:28.932 LINK reset
00:02:28.932 LINK doorbell_aers
00:02:28.932 LINK simple_copy
00:02:28.932 LINK sgl
00:02:28.932 LINK nvme_dp
00:02:28.932 CC examples/accel/perf/accel_perf.o
00:02:28.932 LINK overhead
00:02:28.932 CC examples/blob/cli/blobcli.o
00:02:29.190 CC examples/blob/hello_world/hello_blob.o
00:02:29.190 LINK memory_ut
00:02:29.190 LINK fdp
00:02:29.190 LINK aer
00:02:29.190 LINK pmr_persistence
00:02:29.190 LINK nvme_compliance
00:02:29.190 LINK cmb_copy
00:02:29.190 LINK dif
00:02:29.190 LINK hotplug
00:02:29.190 LINK hello_world
00:02:29.449 LINK arbitration
00:02:29.449 LINK hello_blob
00:02:29.449 LINK reconnect
00:02:29.449 LINK abort
00:02:29.752 LINK nvme_manage
00:02:29.752 LINK accel_perf
00:02:29.752 CC test/bdev/bdevio/bdevio.o
00:02:29.752 LINK blobcli
00:02:30.030 CC examples/bdev/hello_world/hello_bdev.o
00:02:30.031 CC examples/bdev/bdevperf/bdevperf.o
00:02:30.031 LINK bdevio
00:02:30.289 LINK hello_bdev
00:02:30.289 LINK iscsi_fuzz
00:02:30.289 LINK cuse
00:02:30.855 LINK bdevperf
00:02:31.422 CC examples/nvmf/nvmf/nvmf.o
00:02:31.680 LINK nvmf
00:02:35.873 LINK esnap
00:02:35.873
00:02:35.873 real 1m15.451s
00:02:35.873 user 11m17.802s
00:02:35.873 sys 2m24.906s
00:02:35.873 05:54:46 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:35.873 05:54:46 make -- common/autotest_common.sh@10 -- $ set +x
00:02:35.873 ************************************
00:02:35.873 END TEST make
00:02:35.873 ************************************
00:02:35.873 05:54:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:35.873 05:54:46 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:35.873 05:54:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:35.873 05:54:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.873 05:54:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:35.873 05:54:46 -- pm/common@44 -- $ pid=4107796
00:02:35.873 05:54:46 -- pm/common@50 -- $ kill -TERM 4107796
00:02:35.873 05:54:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.873 05:54:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:35.873 05:54:46 -- pm/common@44 -- $ pid=4107798
00:02:35.873 05:54:46 -- pm/common@50 -- $ kill -TERM 4107798
00:02:35.873 05:54:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.873 05:54:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:35.873 05:54:46 -- pm/common@44 -- $ pid=4107800
00:02:35.873 05:54:46 -- pm/common@50 -- $ kill -TERM 4107800
00:02:35.873 05:54:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.873 05:54:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:35.873 05:54:46 -- pm/common@44 -- $ pid=4107825
00:02:35.873 05:54:46 -- pm/common@50 -- $ sudo -E kill -TERM 4107825
00:02:35.873 05:54:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:35.873 05:54:46 -- nvmf/common.sh@7 -- # uname -s
00:02:35.873 05:54:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:35.873 05:54:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:35.873 05:54:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:35.873 05:54:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:35.873 05:54:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:35.873 05:54:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:35.873 05:54:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:35.873 05:54:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:35.873 05:54:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:35.874 05:54:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:35.874 05:54:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:02:35.874 05:54:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:02:35.874 05:54:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:35.874 05:54:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:35.874 05:54:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:35.874 05:54:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:35.874 05:54:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:35.874 05:54:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:35.874 05:54:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:35.874 05:54:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:35.874 05:54:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.874 05:54:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.874 05:54:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.874 05:54:46 -- paths/export.sh@5 -- # export PATH
00:02:35.874 05:54:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.874 05:54:46 -- nvmf/common.sh@47 -- # : 0
00:02:35.874 05:54:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:35.874 05:54:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:35.874 05:54:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:35.874 05:54:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:35.874 05:54:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:35.874 05:54:46 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:35.874 05:54:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:35.874 05:54:46 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:35.874 05:54:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:35.874 05:54:46 -- spdk/autotest.sh@32 -- # uname -s
00:02:35.874 05:54:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:35.874 05:54:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:35.874 05:54:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:35.874 05:54:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:35.874 05:54:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:35.874 05:54:46 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:35.874 05:54:47 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:35.874 05:54:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:35.874 05:54:47 -- spdk/autotest.sh@48 -- # udevadm_pid=4166557
00:02:35.874 05:54:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:35.874 05:54:47 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:35.874 05:54:47 -- pm/common@17 -- # local monitor
00:02:35.874 05:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.874 05:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.874 05:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.874 05:54:47 -- pm/common@21 -- # date +%s
00:02:35.874 05:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.874 05:54:47 -- pm/common@21 -- # date +%s
00:02:35.874 05:54:47 -- pm/common@25 -- # sleep 1
00:02:35.874 05:54:47 -- pm/common@21 -- # date +%s
00:02:35.874 05:54:47 -- pm/common@21 -- # date +%s
00:02:35.874 05:54:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721966087
00:02:35.874 05:54:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721966087
00:02:35.874 05:54:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721966087
00:02:35.874 05:54:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721966087
00:02:35.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721966087_collect-vmstat.pm.log
00:02:35.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721966087_collect-cpu-load.pm.log
00:02:35.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721966087_collect-cpu-temp.pm.log
00:02:35.874 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721966087_collect-bmc-pm.bmc.pm.log
00:02:36.809 05:54:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:36.809 05:54:48 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:36.810 05:54:48 -- common/autotest_common.sh@724 -- # xtrace_disable
00:02:36.810 05:54:48 -- common/autotest_common.sh@10 -- # set +x
00:02:36.810 05:54:48 -- spdk/autotest.sh@59 -- # create_test_list
00:02:36.810 05:54:48 -- common/autotest_common.sh@748 -- # xtrace_disable
00:02:36.810 05:54:48 -- common/autotest_common.sh@10 -- # set +x
00:02:36.810 05:54:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:36.810 05:54:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.810 05:54:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.810 05:54:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:36.810 05:54:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.810 05:54:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:36.810 05:54:48 -- common/autotest_common.sh@1455 -- # uname
00:02:36.810 05:54:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:36.810 05:54:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:36.810 05:54:48 -- common/autotest_common.sh@1475 -- # uname
00:02:36.810 05:54:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:36.810 05:54:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:36.810 05:54:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:36.810 05:54:48 -- spdk/autotest.sh@72 -- # hash lcov
00:02:36.810 05:54:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:36.810 05:54:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:36.810 --rc lcov_branch_coverage=1
00:02:36.810 --rc lcov_function_coverage=1
00:02:36.810 --rc genhtml_branch_coverage=1
00:02:36.810 --rc genhtml_function_coverage=1
00:02:36.810 --rc genhtml_legend=1
00:02:36.810 --rc geninfo_all_blocks=1
00:02:36.810 '
00:02:36.810 05:54:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:36.810 --rc lcov_branch_coverage=1
00:02:36.810 --rc lcov_function_coverage=1
00:02:36.810 --rc genhtml_branch_coverage=1
00:02:36.810 --rc genhtml_function_coverage=1
00:02:36.810 --rc genhtml_legend=1
00:02:36.810 --rc geninfo_all_blocks=1
00:02:36.810 '
00:02:36.810 05:54:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:36.810 --rc lcov_branch_coverage=1
00:02:36.810 --rc lcov_function_coverage=1
00:02:36.810 --rc genhtml_branch_coverage=1
00:02:36.810 --rc genhtml_function_coverage=1
00:02:36.810 --rc genhtml_legend=1
00:02:36.810 --rc geninfo_all_blocks=1
00:02:36.810 --no-external'
00:02:36.810 05:54:48 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:36.810 --rc lcov_branch_coverage=1
00:02:36.810 --rc lcov_function_coverage=1
00:02:36.810 --rc genhtml_branch_coverage=1
00:02:36.810 --rc genhtml_function_coverage=1
00:02:36.810 --rc genhtml_legend=1
00:02:36.810 --rc geninfo_all_blocks=1
00:02:36.810 --no-external'
00:02:36.810 05:54:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:36.810 lcov: LCOV version 1.14
00:02:36.810 05:54:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:54.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:54.895 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:03:07.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:03:07.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:03:07.099 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:07.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:07.099 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:07.100 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:07.100 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:07.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:07.100 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:09.671 05:55:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:09.671 05:55:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:09.671 05:55:20 -- common/autotest_common.sh@10 -- # set +x 00:03:09.671 05:55:20 -- spdk/autotest.sh@91 -- # rm -f 00:03:09.671 05:55:20 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.605 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:10.605 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:10.606 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:10.606 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:10.606 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:10.606 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:10.606 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:10.606 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:10.606 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:10.606 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:10.606 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:10.606 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:10.865 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:10.865 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:10.865 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:10.865 0000:80:04.1 (8086 0e21): 
Already using the ioatdma driver 00:03:10.865 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:10.865 05:55:22 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:10.865 05:55:22 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:10.865 05:55:22 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:10.865 05:55:22 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:10.865 05:55:22 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:10.865 05:55:22 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:10.865 05:55:22 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:10.865 05:55:22 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.865 05:55:22 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:10.865 05:55:22 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:10.865 05:55:22 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:10.865 05:55:22 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:10.865 05:55:22 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:10.865 05:55:22 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:10.865 05:55:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:10.865 No valid GPT data, bailing 00:03:10.865 05:55:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.865 05:55:22 -- scripts/common.sh@391 -- # pt= 00:03:10.865 05:55:22 -- scripts/common.sh@392 -- # return 1 00:03:10.865 05:55:22 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:10.865 1+0 records in 00:03:10.865 1+0 records out 00:03:10.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0017626 s, 595 MB/s 00:03:10.865 05:55:22 -- spdk/autotest.sh@118 -- # sync 00:03:10.865 05:55:22 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:10.865 05:55:22 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:10.865 05:55:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:12.768 05:55:24 -- spdk/autotest.sh@124 -- # uname -s 00:03:12.768 05:55:24 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:12.769 05:55:24 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:12.769 05:55:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.769 05:55:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.769 05:55:24 -- common/autotest_common.sh@10 -- # set +x 00:03:12.769 ************************************ 00:03:12.769 START TEST setup.sh 00:03:12.769 ************************************ 00:03:12.769 05:55:24 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.027 * Looking for test storage... 00:03:13.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.027 05:55:24 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:13.027 05:55:24 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:13.027 05:55:24 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:13.027 05:55:24 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:13.027 05:55:24 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:13.027 05:55:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:13.027 ************************************ 00:03:13.027 START TEST acl 00:03:13.027 ************************************ 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:13.027 * Looking for test storage... 
00:03:13.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.027 05:55:24 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.027 05:55:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:13.027 05:55:24 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:13.027 05:55:24 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:13.027 05:55:24 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:13.027 05:55:24 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:13.027 05:55:24 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:13.027 05:55:24 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.027 05:55:24 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.403 05:55:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:14.403 05:55:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:14.403 05:55:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.403 05:55:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:14.403 05:55:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.403 05:55:25 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:15.779 Hugepages 00:03:15.779 node hugesize free / total 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.779 00:03:15.779 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.779 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 
05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:15.780 05:55:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:15.780 05:55:26 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.780 05:55:26 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.780 05:55:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:15.780 ************************************ 00:03:15.780 START TEST denied 00:03:15.780 ************************************ 00:03:15.780 05:55:27 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:15.780 05:55:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:15.780 05:55:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:15.780 05:55:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.780 05:55:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.780 05:55:27 setup.sh.acl.denied -- 
setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:17.682 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.682 05:55:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.215 00:03:20.215 real 0m3.952s 00:03:20.215 user 0m1.166s 00:03:20.215 sys 0m1.866s 00:03:20.215 05:55:30 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:20.215 05:55:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:20.215 ************************************ 00:03:20.215 END TEST denied 00:03:20.215 ************************************ 00:03:20.215 05:55:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:20.215 05:55:30 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.215 05:55:30 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.215 05:55:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:20.215 ************************************ 00:03:20.215 START TEST allowed 00:03:20.215 
************************************ 00:03:20.215 05:55:31 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:20.215 05:55:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:20.215 05:55:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:20.215 05:55:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:20.215 05:55:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.215 05:55:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.117 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.117 05:55:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:22.117 05:55:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:22.117 05:55:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:22.117 05:55:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.117 05:55:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.020 00:03:24.020 real 0m3.884s 00:03:24.020 user 0m0.977s 00:03:24.020 sys 0m1.728s 00:03:24.020 05:55:34 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:24.020 05:55:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:24.020 ************************************ 00:03:24.020 END TEST allowed 00:03:24.020 ************************************ 00:03:24.020 00:03:24.020 real 0m10.768s 00:03:24.020 user 0m3.298s 00:03:24.020 sys 0m5.444s 00:03:24.020 05:55:34 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:24.020 05:55:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.020 ************************************ 00:03:24.020 END TEST acl 00:03:24.020 ************************************ 00:03:24.020 05:55:34 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.020 05:55:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.020 05:55:34 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.020 05:55:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.020 ************************************ 00:03:24.020 START TEST hugepages 00:03:24.020 ************************************ 00:03:24.020 05:55:34 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.020 * Looking for test storage... 00:03:24.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.020 
05:55:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43465652 kB' 'MemAvailable: 46969724 kB' 'Buffers: 2704 kB' 'Cached: 10518344 kB' 'SwapCached: 0 kB' 'Active: 7512832 kB' 'Inactive: 3506192 kB' 'Active(anon): 7117336 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501144 kB' 'Mapped: 200640 kB' 'Shmem: 6619360 kB' 'KReclaimable: 192448 kB' 'Slab: 558604 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366156 kB' 'KernelStack: 12864 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 8204628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 
05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.020 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 
05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.021 05:55:35 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.021 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # 
local node hp 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:24.022 05:55:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:24.022 05:55:35 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.022 05:55:35 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.022 05:55:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.022 ************************************ 00:03:24.022 START TEST default_setup 00:03:24.022 ************************************ 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # 
get_test_nr_hugepages 2097152 0 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup 
output 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.022 05:55:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.955 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:24.955 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:24.955 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:24.955 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:24.955 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:24.955 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:24.955 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:24.955 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:24.955 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:24.955 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:24.956 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:24.956 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:24.956 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:24.956 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:24.956 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:25.214 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:26.154 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:26.154 05:55:37 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45587732 kB' 'MemAvailable: 49091804 kB' 'Buffers: 2704 kB' 'Cached: 10518436 kB' 'SwapCached: 0 kB' 'Active: 7531996 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136500 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520256 kB' 'Mapped: 200652 kB' 'Shmem: 6619452 kB' 'KReclaimable: 192448 kB' 'Slab: 558652 kB' 'SReclaimable: 
192448 kB' 'SUnreclaim: 366204 kB' 'KernelStack: 12640 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8225736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.154 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.154 
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... iterations over Active(anon) through HardwareCorrupted, none matching AnonHugePages, each skipped via IFS=': '; read -r var val _; continue — repetitive trace trimmed ...] 00:03:26.154-00:03:26.155
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.155
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.155
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.155
05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:26.155
05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.155
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.155
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.155 05:55:37 setup.sh.hugepages.default_setup --
setup/common.sh@19 -- # local var val 00:03:26.155 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.155 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.155 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.156 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.156 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.156 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.156 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.156 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.156 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45588016 kB' 'MemAvailable: 49092088 kB' 'Buffers: 2704 kB' 'Cached: 10518440 kB' 'SwapCached: 0 kB' 'Active: 7532016 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136520 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520268 kB' 'Mapped: 200688 kB' 'Shmem: 6619456 kB' 'KReclaimable: 192448 kB' 'Slab: 558684 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366236 kB' 'KernelStack: 12720 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8225756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:26.156
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... iterations over MemTotal through HugePages_Rsvd, none matching HugePages_Surp, each skipped via IFS=': '; read -r var val _; continue — repetitive trace trimmed ...] 00:03:26.156-00:03:26.157
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.157
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.157
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.157
05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:26.157
05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup --
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.157 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45588036 kB' 'MemAvailable: 49092108 kB' 'Buffers: 2704 kB' 'Cached: 10518456 kB' 'SwapCached: 0 kB' 'Active: 7531568 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136072 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519820 kB' 'Mapped: 200688 kB' 'Shmem: 6619472 kB' 'KReclaimable: 192448 kB' 'Slab: 558688 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366240 kB' 'KernelStack: 12768 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8225776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 
'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.158 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.158 05:55:37 [... per-field scan of /proc/meminfo elided: every key except HugePages_Rsvd fails the match and hits "continue" ...] setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.159 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.159 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.160 nr_hugepages=1024 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.160 resv_hugepages=0 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.160 
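The repeated continue/IFS/read churn in the trace above is a `get_meminfo` helper scanning `/proc/meminfo` one `key: value` line at a time until the requested key matches. A minimal standalone sketch of that pattern (an illustrative reimplementation for clarity, not the exact SPDK `setup/common.sh` code; the optional file argument is an addition here so the demo does not depend on a real `/proc/meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan pattern visible in this log: read
# "key: value" lines, skip non-matching keys via "continue", and print
# the value of the requested key.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the key we want: next line
        echo "$val"
        return 0
    done < "$file"
    return 1                               # requested key not present
}

# Demo against a synthetic meminfo snippet (values from this log):
sample=$(mktemp)
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
              'HugePages_Rsvd: 0' 'HugePages_Surp: 0' > "$sample"
get_meminfo HugePages_Surp "$sample"   # prints 0
rm -f "$sample"
```

Because `IFS=': '` splits on both the colon and the space, `val` receives only the numeric field and any trailing `kB` unit lands in the throwaway `_` variable, which matches the `surp=0` / `resv=0` assignments in the log.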
surplus_hugepages=0 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.160 anon_hugepages=0 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45585268 kB' 'MemAvailable: 49089340 kB' 'Buffers: 2704 kB' 'Cached: 10518476 kB' 'SwapCached: 0 kB' 'Active: 7531476 kB' 'Inactive: 3506192 kB' 'Active(anon): 7135980 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 
'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519728 kB' 'Mapped: 200688 kB' 'Shmem: 6619492 kB' 'KReclaimable: 192448 kB' 'Slab: 558688 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366240 kB' 'KernelStack: 12784 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8225432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.160 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.160 05:55:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.160 05:55:37 [... per-field scan of /proc/meminfo elided: every key except HugePages_Total fails the match and hits "continue"; this log chunk ends mid-scan ...]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.161 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 
'MemFree: 21282180 kB' 'MemUsed: 11594760 kB' 'SwapCached: 0 kB' 'Active: 5004236 kB' 'Inactive: 3357228 kB' 'Active(anon): 4732304 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8258080 kB' 'Mapped: 103172 kB' 'AnonPages: 106512 kB' 'Shmem: 4628920 kB' 'KernelStack: 6408 kB' 'PageTables: 3124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87616 kB' 'Slab: 300004 kB' 'SReclaimable: 87616 kB' 'SUnreclaim: 212388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 
05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.162 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.163 05:55:37 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.163 node0=1024 expecting 1024 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.163 00:03:26.163 real 0m2.323s 00:03:26.163 user 0m0.608s 00:03:26.163 sys 0m0.848s 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.163 05:55:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:26.163 ************************************ 00:03:26.163 END TEST default_setup 00:03:26.163 ************************************ 00:03:26.163 05:55:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:26.163 05:55:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:26.163 05:55:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.163 05:55:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.163 ************************************ 00:03:26.163 START TEST per_node_1G_alloc 00:03:26.163 ************************************ 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:26.163 
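The long `[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue` runs above are the xtrace of `get_meminfo` walking meminfo one field at a time until the requested key matches, then echoing its value. A minimal standalone sketch of that parsing pattern (function name matches the trace; the simplified body is an assumption and omits the per-node `Node <n> ` prefix handling shown in the trace):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the scan seen in the xtrace above: read
# /proc/meminfo field by field with IFS=': ' and print the value of
# one key (here HugePages_Total).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through to the next line (the
        # "continue" lines in the trace); the matching key echoes
        # its value, e.g. 1024 in this run.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo HugePages_Total
```

For a NUMA node, the trace instead reads `/sys/devices/system/node/node0/meminfo`, whose lines carry a `Node 0 ` prefix that `setup/common.sh` strips with `"${mem[@]#Node +([0-9]) }"` before the same scan runs.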
05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:26.163 05:55:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.163 05:55:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.097 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.097 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.097 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.097 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.097 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.097 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.097 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.097 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.097 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.359 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.359 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.359 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.359 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.359 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.359 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.359 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.359 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.359 05:55:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45587060 kB' 'MemAvailable: 49091132 kB' 'Buffers: 2704 kB' 'Cached: 10518556 kB' 'SwapCached: 0 kB' 'Active: 7531632 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136136 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519856 kB' 'Mapped: 200708 kB' 'Shmem: 6619572 kB' 'KReclaimable: 192448 kB' 'Slab: 558828 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366380 kB' 'KernelStack: 12832 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8228608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
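The long `[[ field == ... ]] / continue` run that follows is the xtrace of get_meminfo scanning every /proc/meminfo field for the one it wants (here AnonHugePages). A hedged reimplementation of that pattern, inferred from the trace rather than taken from setup/common.sh (the function name and structure are assumptions):

```shell
# Sketch: with IFS=': ' each meminfo record splits into a field name and a
# value; every non-matching field is skipped with `continue`, exactly as in
# the trace above.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated [[ ... ]] checks above
        echo "$val"                        # "kB" suffix lands in $_ and is dropped
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo AnonHugePages
```

The trailing `echo 0 / return 0` pair at the end of each scan in the trace is the script reporting the matched value back to its caller the same way.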
00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.359 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 
05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.360 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.361 05:55:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45587060 kB' 'MemAvailable: 49091132 kB' 'Buffers: 2704 kB' 'Cached: 10518556 kB' 'SwapCached: 0 kB' 'Active: 7533000 kB' 'Inactive: 3506192 kB' 'Active(anon): 7137504 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521252 kB' 'Mapped: 200708 kB' 'Shmem: 6619572 kB' 'KReclaimable: 192448 kB' 'Slab: 558820 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366372 kB' 'KernelStack: 12896 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8226008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 
00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.361 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / compare / continue trace repeated for each remaining /proc/meminfo field -- Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- each skipped until the requested key matches]
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45587464 kB' 'MemAvailable: 49091536 kB' 'Buffers: 2704 kB' 'Cached: 10518576 kB' 'SwapCached: 0 kB' 'Active: 7531604 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136108 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519748 kB' 'Mapped: 200700 kB' 'Shmem: 6619592 kB' 'KReclaimable: 192448 kB' 'Slab: 558916 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366468 kB' 'KernelStack: 12832 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8226028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB'
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.362 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[the same field-by-field IFS/read/compare/continue scan then repeats against HugePages_Rsvd for each meminfo field through FileHugePages; the trace continues in this pattern at 00:03:27.363]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.363 nr_hugepages=1024 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.363 resv_hugepages=0 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.363 surplus_hugepages=0 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.363 anon_hugepages=0 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45587464 kB' 'MemAvailable: 49091536 kB' 'Buffers: 2704 kB' 'Cached: 10518600 kB' 'SwapCached: 0 kB' 'Active: 7532784 kB' 'Inactive: 3506192 kB' 'Active(anon): 7137288 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520900 kB' 'Mapped: 201136 kB' 'Shmem: 6619616 kB' 'KReclaimable: 192448 kB' 'Slab: 558916 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366468 kB' 'KernelStack: 12800 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8228200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.363 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 
05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.628 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.629 05:55:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.629 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.630 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.630 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.630 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.630 05:55:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.630 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.630 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.630 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22329104 kB' 'MemUsed: 10547836 kB' 'SwapCached: 0 kB' 'Active: 5004384 kB' 'Inactive: 3357228 kB' 'Active(anon): 4732452 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8258092 kB' 'Mapped: 103608 kB' 'AnonPages: 106660 kB' 'Shmem: 4628932 kB' 'KernelStack: 6472 kB' 'PageTables: 3320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87616 kB' 'Slab: 300108 kB' 'SReclaimable: 87616 kB' 'SUnreclaim: 212492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
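The trace above repeatedly exercises setup/common.sh's `get_meminfo`: it chooses either `/proc/meminfo` or the per-node `/sys/devices/system/node/nodeN/meminfo` file, strips the `Node N ` prefix those per-node files carry, then scans `Field: value kB` lines for the requested field and echoes its value. A minimal standalone sketch of that pattern, reconstructed from the xtrace (this is not the actual setup/common.sh source):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the xtrace: pick the per-node
# meminfo file when a node is given, strip the "Node N " prefix that those
# files carry, then scan "Field: value kB" lines for the requested field.
# Reconstructed from the trace; not the actual setup/common.sh source.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem var val _ line
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # no-op for /proc/meminfo lines
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}
```

Called as `get_meminfo HugePages_Surp 0` it prints node 0's surplus-page count, which is what the `(( nodes_test[node] += ... ))` arithmetic in the trace consumes.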
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23254580 kB' 'MemUsed: 4410192 kB' 'SwapCached: 0 kB' 'Active: 2532960 kB' 'Inactive: 148964 kB' 'Active(anon): 2409396 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2263252 kB' 'Mapped: 97680 kB' 'AnonPages: 418760 kB' 'Shmem: 1990724 kB' 'KernelStack: 6344 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104832 kB' 'Slab: 258772 kB' 'SReclaimable: 104832 kB' 'SUnreclaim: 153940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:27.631 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- #
sorted_s[nodes_sys[node]]=1 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.633 node0=512 expecting 512 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:27.633 node1=512 expecting 512 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:27.633 00:03:27.633 real 0m1.295s 00:03:27.633 user 0m0.559s 00:03:27.633 sys 0m0.694s 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.633 05:55:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.633 ************************************ 00:03:27.633 END TEST per_node_1G_alloc 00:03:27.633 ************************************ 00:03:27.633 05:55:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:27.633 05:55:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.633 05:55:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.633 05:55:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.633 ************************************ 00:03:27.633 START TEST even_2G_alloc 00:03:27.633 ************************************ 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.633 05:55:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.576 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.576 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.576 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.576 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.576 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.576 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.576 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.576 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.576 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.576 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.576 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.576 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.576 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.576 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.576 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.576 0000:80:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:03:28.576 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.840 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45567112 kB' 'MemAvailable: 49071184 kB' 'Buffers: 2704 kB' 'Cached: 10518692 kB' 'SwapCached: 0 kB' 'Active: 7531964 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136468 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519888 kB' 'Mapped: 200764 kB' 'Shmem: 6619708 kB' 'KReclaimable: 192448 kB' 'Slab: 558904 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366456 kB' 'KernelStack: 12768 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8226252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.840 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 
05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.841 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45567548 kB' 'MemAvailable: 49071620 kB' 'Buffers: 2704 kB' 'Cached: 10518692 kB' 'SwapCached: 0 kB' 'Active: 7532380 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136884 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 
'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520348 kB' 'Mapped: 200716 kB' 'Shmem: 6619708 kB' 'KReclaimable: 192448 kB' 'Slab: 558904 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366456 kB' 'KernelStack: 12784 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8226272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.842 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.843 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45567008 kB' 'MemAvailable: 49071080 kB' 'Buffers: 2704 
kB' 'Cached: 10518712 kB' 'SwapCached: 0 kB' 'Active: 7531672 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136176 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520048 kB' 'Mapped: 200716 kB' 'Shmem: 6619728 kB' 'KReclaimable: 192448 kB' 'Slab: 558924 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366476 kB' 'KernelStack: 12784 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8226292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.844 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.845 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[ ... identical IFS=': ' / read / compare / continue trace repeats for each remaining /proc/meminfo field (KernelStack through HugePages_Free), none matching HugePages_Rsvd ... ]
00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- #
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.846 nr_hugepages=1024 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.846 resv_hugepages=0 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.846 surplus_hugepages=0 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.846 anon_hugepages=0 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.846 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45567008 kB' 'MemAvailable: 49071080 kB' 'Buffers: 2704 kB' 'Cached: 10518736 kB' 'SwapCached: 0 kB' 'Active: 7531668 kB' 'Inactive: 3506192 kB' 'Active(anon): 7136172 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520016 kB' 'Mapped: 200716 kB' 'Shmem: 6619752 kB' 'KReclaimable: 192448 kB' 'Slab: 558924 kB' 'SReclaimable: 192448 kB' 'SUnreclaim: 366476 kB' 'KernelStack: 12880 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8226316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.846 05:55:40 
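The get_meminfo trace above follows one pattern throughout: read the meminfo source line by line with `IFS=': '`, compare each key against the requested field, and echo the value on a match. A minimal standalone sketch of that pattern (a simplified illustration, not the actual `setup/common.sh` helper; the second argument is added here only so the sketch can be exercised against a sample file):

```shell
# Sketch of the get_meminfo pattern traced above: split each line of a
# meminfo-format file on ': ' and print the value for the requested key.
# Units like "kB" land in the throwaway `_` variable, so only the number
# is returned -- matching what the traced script captures.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1    # key not found
}

# Exercise it against a two-line sample in the snapshot's format.
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541712 kB' 'HugePages_Total: 1024' > "$sample"
get_meminfo HugePages_Total "$sample"   # prints: 1024
rm -f "$sample"
```

The real helper additionally strips a leading `Node N ` prefix from per-node sysfs meminfo files before this loop runs, which is what the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace does.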
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.846 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[ ... identical compare / continue trace repeats for each /proc/meminfo field (MemTotal through Unaccepted), none matching HugePages_Total ... ]
00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.848 05:55:40 
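The get_nodes step traced above enumerates NUMA nodes with an extglob over sysfs and records the per-node hugepage split (512 + 512 = 1024 for this two-node box). A sketch of that pattern, using a temporary stand-in directory so it runs anywhere; the real script globs `/sys/devices/system/node/node+([0-9])`:

```shell
# Sketch of the get_nodes pattern: count nodeN directories and record a
# per-node hugepage count. The fake sysfs tree below is an assumption of
# this sketch; the traced script reads the real /sys/devices/system/node.
shopt -s extglob nullglob

sysfs=$(mktemp -d)                   # stand-in for /sys/devices/system/node
mkdir "$sysfs/node0" "$sysfs/node1"  # two-node system, as in the log

declare -A nodes_sys
for node in "$sysfs"/node+([0-9]); do
    nodes_sys[${node##*node}]=512    # even split: 1024 pages over 2 nodes
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"            # prints: no_nodes=2
rm -rf "$sysfs"
```

The `${node##*node}` expansion strips everything up to the last `node`, leaving the numeric node index used as the array key.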
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22314460 kB' 'MemUsed: 10562480 kB' 'SwapCached: 0 kB' 'Active: 5005268 kB' 'Inactive: 3357228 kB' 'Active(anon): 4733336 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8258148 kB' 'Mapped: 103172 kB' 'AnonPages: 107652 kB' 'Shmem: 4628988 kB' 'KernelStack: 6520 kB' 'PageTables: 3468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87584 kB' 'Slab: 300012 kB' 'SReclaimable: 87584 kB' 'SUnreclaim: 212428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.848 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
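The node0 read above switches `mem_f` to `/sys/devices/system/node/node0/meminfo`, whose lines carry a `Node 0 ` prefix that `setup/common.sh@29` strips with an extglob expansion before the key-scan. A sketch of just that stripping step, using two lines taken from the printf in the trace:

```shell
#!/usr/bin/env bash
# Per-node meminfo lines look like "Node 0 MemTotal: ... kB"; the script
# removes the "Node <id> " prefix from every array element in one expansion.
shopt -s extglob
mem=('Node 0 MemTotal: 32876940 kB' 'Node 0 HugePages_Total: 512')
mem=("${mem[@]#Node +([0-9]) }")   # strip "Node 0 " from each element
printf '%s\n' "${mem[@]}"
```

After the strip, the same `IFS=': ' read` loop works identically for system-wide and per-node files.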
00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 
05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.849 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23252548 kB' 'MemUsed: 4412224 kB' 'SwapCached: 0 kB' 'Active: 2526428 kB' 'Inactive: 148964 kB' 'Active(anon): 2402864 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2263292 kB' 'Mapped: 97544 kB' 'AnonPages: 412428 kB' 'Shmem: 1990764 kB' 'KernelStack: 6344 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104832 kB' 'Slab: 258880 kB' 'SReclaimable: 104832 kB' 'SUnreclaim: 154048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.850 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:28.851 node0=512 expecting 512 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:28.851 node1=512 expecting 512 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:28.851 00:03:28.851 real 0m1.368s 00:03:28.851 user 0m0.579s 00:03:28.851 sys 0m0.748s 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.851 05:55:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.851 ************************************ 00:03:28.851 END TEST even_2G_alloc 00:03:28.851 ************************************ 00:03:29.109 05:55:40 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:29.109 05:55:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:29.109 05:55:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:29.109 05:55:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.109 ************************************ 00:03:29.109 START TEST odd_alloc 00:03:29.109 ************************************ 00:03:29.109 05:55:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:29.109 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 
-- # nodes_test=() 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.110 05:55:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.044 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.044 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.044 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.044 0000:00:04.5 (8086 0e25): Already using the vfio-pci 
driver 00:03:30.044 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.044 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.044 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.044 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.044 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.044 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.044 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.044 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.044 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.044 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.044 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.044 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.044 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.044 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45565648 kB' 'MemAvailable: 49069704 kB' 'Buffers: 2704 kB' 'Cached: 10518820 kB' 'SwapCached: 0 kB' 'Active: 7529144 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133648 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517048 kB' 'Mapped: 199896 kB' 'Shmem: 6619836 kB' 'KReclaimable: 192416 kB' 'Slab: 558584 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 366168 kB' 'KernelStack: 12800 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 8211024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:30.044 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # scanned /proc/meminfo keys (MemTotal .. HardwareCorrupted) for AnonHugePages; no match, continue 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45565728 kB' 'MemAvailable: 49069784 kB' 'Buffers: 
2704 kB' 'Cached: 10518828 kB' 'SwapCached: 0 kB' 'Active: 7528800 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133304 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516688 kB' 'Mapped: 199880 kB' 'Shmem: 6619844 kB' 'KReclaimable: 192416 kB' 'Slab: 558588 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 366172 kB' 'KernelStack: 12816 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 8211040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:30.310 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # scanning /proc/meminfo keys (MemTotal .. Mapped) for HugePages_Surp; no match, continue 00:03:30.311 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.312 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.312 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45565228 kB' 'MemAvailable: 49069284 kB' 'Buffers: 2704 kB' 'Cached: 10518844 kB' 'SwapCached: 0 kB' 'Active: 7528884 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133388 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516748 kB' 'Mapped: 199880 kB' 'Shmem: 6619860 kB' 'KReclaimable: 192416 kB' 'Slab: 558628 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 366212 kB' 'KernelStack: 12800 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 
'Committed_AS: 8211064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:30.312 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:30.314 nr_hugepages=1025 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.314 resv_hugepages=0 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.314 surplus_hugepages=0 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.314 anon_hugepages=0 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45565228 kB' 'MemAvailable: 49069284 kB' 'Buffers: 2704 kB' 'Cached: 10518844 kB' 'SwapCached: 0 kB' 'Active: 7528704 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133208 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516552 kB' 'Mapped: 199880 kB' 'Shmem: 6619860 kB' 'KReclaimable: 192416 kB' 'Slab: 558628 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 366212 kB' 'KernelStack: 12832 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 8211084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.314 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 
05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 
05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.315 
05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.315 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local 
node 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.316 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22316044 kB' 'MemUsed: 10560896 kB' 'SwapCached: 0 kB' 'Active: 5004416 kB' 'Inactive: 3357228 kB' 'Active(anon): 4732484 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8258296 kB' 'Mapped: 102568 kB' 'AnonPages: 106504 kB' 'Shmem: 4629136 kB' 'KernelStack: 6472 kB' 'PageTables: 3180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87584 kB' 'Slab: 299876 kB' 'SReclaimable: 87584 kB' 'SUnreclaim: 212292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.316 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.316 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 
05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.317 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.318 
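The long run of `IFS=': '` / `read -r var val _` / `continue` entries above is one call to a `get_meminfo` helper scanning every row of a meminfo file until it hits the requested key. A minimal sketch of that loop, reconstructed from the trace (variable names mirror the trace, but this is an assumption, not the verbatim `setup/common.sh` source):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo loop visible in the trace: pick one
# "key: value" row out of a (possibly per-node) meminfo file,
# skipping non-matching keys with `continue`.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _
    # Per-node statistics live under /sys, as the trace shows for node1.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every row with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the `continue` flood in the log
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo MemTotal   # prints the system MemTotal (kB) on Linux
```

With `IFS=': '` both the colon and the spaces act as field separators, so a row like `MemTotal: 32876940 kB` splits cleanly into the key, the number, and a discarded unit.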
05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23248932 kB' 'MemUsed: 4415840 kB' 'SwapCached: 0 kB' 'Active: 2524620 kB' 'Inactive: 148964 kB' 'Active(anon): 2401056 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2263292 kB' 'Mapped: 97312 kB' 'AnonPages: 410324 kB' 'Shmem: 1990764 kB' 'KernelStack: 6360 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104832 kB' 'Slab: 258752 kB' 'SReclaimable: 104832 kB' 
'SUnreclaim: 153920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 
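The `\H\u\g\e\P\a\g\e\s\_\S\u\r\p` seen throughout these entries is only an xtrace artifact: `bash -x` re-quotes the literal right-hand side of a `[[ $var == "$get" ]]` test by escaping every character, so the comparison is plain string equality, not a glob. A minimal reproduction (names illustrative, not from the SPDK source):

```shell
# Each [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] line in the log is this
# literal comparison as rendered by xtrace; only the HugePages_Surp row matches.
get=HugePages_Surp
match() { [[ $1 == "$get" ]]; }
match MemTotal       && echo yes || echo no   # -> no  (the loop `continue`s)
match HugePages_Surp && echo yes || echo no   # -> yes (the loop echoes the value)
```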
05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.318 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:30.319 node0=512 expecting 513 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:30.319 node1=513 expecting 512 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:30.319 00:03:30.319 real 0m1.307s 00:03:30.319 user 0m0.534s 00:03:30.319 sys 0m0.727s 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.319 05:55:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.319 ************************************ 00:03:30.319 END TEST odd_alloc 00:03:30.319 ************************************ 00:03:30.319 05:55:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:30.319 05:55:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.319 05:55:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.319 05:55:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.319 ************************************ 00:03:30.319 START TEST custom_alloc 00:03:30.319 ************************************ 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.319 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.320 05:55:41 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.320 05:55:41 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.320 05:55:41 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.320 05:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.699 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:31.699 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:31.699 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:31.699 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:31.699 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:31.699 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:31.699 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:31.699 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:31.699 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:31.699 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:31.699 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:31.699 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:31.699 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:31.699 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:31.700 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:31.700 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:31.700 0000:80:04.0 (8086 0e20): Already using 
the vfio-pci driver 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.700 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44516388 kB' 'MemAvailable: 48020444 kB' 'Buffers: 2704 kB' 'Cached: 10518952 kB' 'SwapCached: 0 kB' 'Active: 7530512 kB' 'Inactive: 3506192 kB' 'Active(anon): 7135016 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517528 kB' 'Mapped: 199956 kB' 'Shmem: 6619968 kB' 'KReclaimable: 192416 kB' 'Slab: 558224 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365808 kB' 'KernelStack: 12880 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8210912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.700 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 
05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.701 
05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44520940 kB' 'MemAvailable: 48024996 kB' 'Buffers: 2704 kB' 'Cached: 10518952 kB' 'SwapCached: 0 kB' 'Active: 7529464 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133968 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 517352 kB' 'Mapped: 199888 kB' 'Shmem: 6619968 kB' 'KReclaimable: 192416 kB' 'Slab: 558224 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365808 kB' 'KernelStack: 12896 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8211928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.701 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.702 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.702 05:55:42 
00:03:31.702-703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: for each /proc/meminfo field from Buffers through HugePages_Rsvd, the loop runs IFS=': '; read -r var val _; [[ $var == HugePages_Surp ]] fails and hits continue; the scan stops when HugePages_Surp matches] 00:03:31.703 05:55:42
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44519896 kB' 'MemAvailable: 48023952 kB' 'Buffers: 2704 kB' 'Cached: 10518972 kB' 'SwapCached: 0 kB' 'Active: 7530188 kB' 'Inactive: 3506192 kB' 'Active(anon): 7134692 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 517988 kB' 'Mapped: 199888 kB' 'Shmem: 6619988 kB' 'KReclaimable: 192416 kB' 'Slab: 558224 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365808 kB' 'KernelStack: 13072 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8213316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.703 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.704 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.704 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.704 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.704 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.704 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.704 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: the same per-field scan repeats with get=HugePages_Rsvd; fields Buffers through ShmemHugePages are compared against HugePages_Rsvd and skipped via continue; trace continues] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:31.705 nr_hugepages=1536 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.705 resv_hugepages=0 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.705 surplus_hugepages=0 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.705 anon_hugepages=0 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.705 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44515940 kB' 'MemAvailable: 48019996 kB' 
'Buffers: 2704 kB' 'Cached: 10518980 kB' 'SwapCached: 0 kB' 'Active: 7530616 kB' 'Inactive: 3506192 kB' 'Active(anon): 7135120 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518348 kB' 'Mapped: 199888 kB' 'Shmem: 6619996 kB' 'KReclaimable: 192416 kB' 'Slab: 558224 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365808 kB' 'KernelStack: 13264 kB' 'PageTables: 9828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 8213628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196448 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 
05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.706 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 
05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.707 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22314212 kB' 'MemUsed: 
10562728 kB' 'SwapCached: 0 kB' 'Active: 5005476 kB' 'Inactive: 3357228 kB' 'Active(anon): 4733544 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8258364 kB' 'Mapped: 102568 kB' 'AnonPages: 107520 kB' 'Shmem: 4629204 kB' 'KernelStack: 6776 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87584 kB' 'Slab: 299512 kB' 'SReclaimable: 87584 kB' 'SUnreclaim: 211928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.708 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.709 05:55:42 
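The node-0 pass above shows `get_meminfo` walking every meminfo key until it hits `HugePages_Surp`, then echoing the value. A minimal standalone sketch of that helper, with the per-node fallback and `Node <id>` prefix-stripping inferred from the `setup/common.sh@22`–`@33` trace lines (this is a reconstruction from the log, not the actual SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of get_meminfo as traced: read one field (e.g. HugePages_Surp)
# from /sys/devices/system/node/node<N>/meminfo when a node is given and
# the file exists, otherwise from /proc/meminfo.
get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  local -a mem
  local var val _
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  shopt -s extglob
  mapfile -t mem < "$mem_f"
  # Per-node meminfo lines carry a "Node <id> " prefix; strip it so the
  # key matching below works for both file formats.
  mem=("${mem[@]#Node +([0-9]) }")
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "${val:-0}"
      return 0
    fi
  done < <(printf '%s\n' "${mem[@]}")
  echo 0
}

get_meminfo MemTotal
```

On a NUMA box, `get_meminfo HugePages_Surp 0` reads node 0's file, matching the `node=0` invocation in the trace.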
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.709 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22199076 kB' 'MemUsed: 5465696 kB' 'SwapCached: 0 kB' 'Active: 2525956 kB' 'Inactive: 148964 kB' 'Active(anon): 2402392 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 148964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2263384 kB' 'Mapped: 97320 kB' 'AnonPages: 411640 kB' 'Shmem: 1990856 kB' 'KernelStack: 6344 kB' 'PageTables: 4532 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104832 kB' 'Slab: 258712 kB' 'SReclaimable: 104832 kB' 'SUnreclaim: 153880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.710 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.711 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.712 05:55:42 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:31.712 node0=512 expecting 512 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:31.712 node1=1024 expecting 1024 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:31.712 00:03:31.712 real 0m1.387s 00:03:31.712 user 0m0.613s 00:03:31.712 sys 0m0.734s 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.712 05:55:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.712 ************************************ 00:03:31.712 END TEST custom_alloc 00:03:31.712 ************************************ 00:03:31.712 05:55:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:31.712 05:55:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.712 05:55:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.712 05:55:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.712 ************************************ 00:03:31.712 START TEST no_shrink_alloc 00:03:31.712 ************************************ 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:31.712 05:55:42 
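The `custom_alloc` test ends by echoing each node's count ("node0=512 expecting 512", "node1=1024 expecting 1024") and comparing the comma-joined counts against the requested split (`[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]`). A hedged re-creation of that final check, with variable names assumed from the `hugepages.sh@126`–`@130` trace rather than taken from the script itself:

```shell
#!/usr/bin/env bash
# Sketch of the closing verification step: join the per-node test counts
# with commas and compare against the expected allocation string.
nodes_test=(512 1024)   # counts measured per node in the trace
expected="512,1024"     # the split custom_alloc asked for

for node in "${!nodes_test[@]}"; do
  echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done

old_ifs=$IFS
IFS=,
actual="${nodes_test[*]}"   # "512,1024"
IFS=$old_ifs

[[ $actual == "$expected" ]] && echo "hugepage split verified"
```

If any node's `HugePages_Surp` had been nonzero, the `(( nodes_test[node] += resv ))` accounting earlier in the trace would shift these counts and the comparison would fail the test.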
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.712 05:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.092 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:33.092 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:33.092 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:33.092 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:33.092 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:33.092 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:33.092 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:33.092 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:33.092 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:33.092 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:33.092 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:33.092 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:33.092 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:33.092 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:33.092 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:33.092 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:33.092 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45461964 kB' 'MemAvailable: 48966020 kB' 'Buffers: 2704 kB' 'Cached: 10519088 kB' 'SwapCached: 0 kB' 'Active: 7529824 kB' 'Inactive: 3506192 kB' 'Active(anon): 7134328 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517504 kB' 'Mapped: 199916 kB' 'Shmem: 6620104 kB' 'KReclaimable: 192416 kB' 'Slab: 558196 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365780 kB' 'KernelStack: 12896 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8211704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.092 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.093 
05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.093 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45462556 kB' 'MemAvailable: 48966612 kB' 'Buffers: 2704 kB' 'Cached: 10519088 kB' 'SwapCached: 0 kB' 'Active: 7529488 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133992 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517160 kB' 'Mapped: 
199908 kB' 'Shmem: 6620104 kB' 'KReclaimable: 192416 kB' 'Slab: 558196 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365780 kB' 'KernelStack: 12896 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8211720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.094 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.094 05:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 
05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 
05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.095 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45463252 kB' 'MemAvailable: 48967308 kB' 'Buffers: 2704 kB' 'Cached: 10519108 kB' 'SwapCached: 0 kB' 'Active: 7529480 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133984 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517168 kB' 'Mapped: 199908 kB' 'Shmem: 6620124 kB' 'KReclaimable: 192416 kB' 'Slab: 558284 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365868 kB' 'KernelStack: 12896 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8211744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.096 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.097 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / continue iterations for the remaining /proc/meminfo keys (Bounce through HugePages_Free) elided ...] 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.098 nr_hugepages=1024 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.098 resv_hugepages=0 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.098 surplus_hugepages=0 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.098 anon_hugepages=0 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.098 05:55:44
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45463260 kB' 'MemAvailable: 48967316 kB' 'Buffers: 2704 kB' 'Cached: 10519128 kB' 'SwapCached: 0 kB' 'Active: 7529484 kB' 'Inactive: 3506192 kB' 'Active(anon): 7133988 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517104 kB' 'Mapped: 199908 kB' 'Shmem: 6620144 kB' 'KReclaimable: 192416 kB' 'Slab: 558284 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365868 kB' 'KernelStack: 12896 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8211764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.098 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / continue iterations for the remaining keys (MemFree through Unaccepted) elided ...] 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- #
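The `(( 1024 == nr_hugepages + surp + resv ))` check at hugepages.sh@110 is the accounting identity this test enforces: the kernel-reported HugePages_Total must equal the requested page count plus surplus and reserved pages. A sketch with the values taken from this log:

```shell
#!/usr/bin/env bash
# Accounting identity checked by the test, with values taken from the
# log above (nr_hugepages=1024, resv=0, surp=0, HugePages_Total=1024).
nr_hugepages=1024   # pages requested for the test
resv=0              # HugePages_Rsvd reported by the kernel
surp=0              # HugePages_Surp reported by the kernel
total=1024          # HugePages_Total reported by the kernel

# The pool is consistent only when the kernel total matches the sum.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting MISMATCH" >&2
fi
```

A reserved page (HugePages_Rsvd) is one committed to a mapping but not yet faulted in, which is why it must be counted alongside surplus pages when comparing against the total.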
no_nodes=2 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21279104 kB' 'MemUsed: 11597836 kB' 'SwapCached: 0 kB' 'Active: 5003632 kB' 'Inactive: 3357228 kB' 'Active(anon): 4731700 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8258368 kB' 'Mapped: 102568 kB' 'AnonPages: 105716 kB' 'Shmem: 4629208 kB' 'KernelStack: 6472 kB' 'PageTables: 3132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87584 kB' 'Slab: 299492 kB' 'SReclaimable: 87584 kB' 'SUnreclaim: 211908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.100 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / continue iterations for the intervening keys (MemFree through Mlocked) elided ...] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101
05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 
05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.101 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.361 node0=1024 expecting 1024 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.361 05:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.296 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:34.296 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.296 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:34.296 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:34.296 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:34.296 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:34.296 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:34.296 0000:00:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:03:34.296 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:34.296 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:34.559 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:34.559 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:34.559 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:34.559 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:34.559 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:34.559 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:34.559 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:34.559 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45474168 kB' 'MemAvailable: 48978224 kB' 'Buffers: 2704 kB' 'Cached: 10519192 kB' 'SwapCached: 0 kB' 'Active: 7530192 kB' 'Inactive: 3506192 kB' 'Active(anon): 7134696 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517744 kB' 'Mapped: 199920 kB' 'Shmem: 6620208 kB' 'KReclaimable: 192416 kB' 'Slab: 558192 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365776 kB' 'KernelStack: 12928 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8211936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.559 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.560 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.561 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45474168 kB' 'MemAvailable: 48978224 kB' 'Buffers: 2704 kB' 'Cached: 10519196 kB' 'SwapCached: 0 kB' 'Active: 7530024 kB' 'Inactive: 3506192 kB' 'Active(anon): 7134528 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517492 kB' 'Mapped: 199912 kB' 'Shmem: 6620212 kB' 'KReclaimable: 192416 kB' 'Slab: 558176 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365760 kB' 'KernelStack: 12912 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8211956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.561 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.562 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
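The trace above is one complete `get_meminfo HugePages_Surp` lookup: the helper snapshots `/proc/meminfo`, then walks it line by line, hitting `continue` for every field that is not the one requested and echoing the value once the name matches. A minimal sketch of that pattern (function and variable names mirror the traced `setup/common.sh` helpers, but this is an illustrative reconstruction, not the SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the lookup pattern the trace repeats: split each meminfo
# line on ': ' into name/value, skip ("continue") non-matching fields,
# and print the value of the requested one.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" lines in the log
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a small meminfo-style snippet so the sketch is self-contained.
printf '%s\n' 'MemTotal: 60541712 kB' 'HugePages_Surp: 0' > /tmp/meminfo.demo
get_meminfo HugePages_Surp /tmp/meminfo.demo
```

Each field the loop skips produces one `[[ ... == ... ]]` / `continue` pair in the xtrace, which is why the log repeats the same two lines for every entry in `/proc/meminfo` before the match.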
00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45474616 kB' 'MemAvailable: 48978672 kB' 'Buffers: 2704 kB' 'Cached: 10519212 kB' 'SwapCached: 0 kB' 'Active: 7529712 kB' 'Inactive: 3506192 kB' 'Active(anon): 7134216 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517196 kB' 'Mapped: 199912 kB' 'Shmem: 6620228 kB' 'KReclaimable: 192416 kB' 'Slab: 558208 kB' 
'SReclaimable: 192416 kB' 'SUnreclaim: 365792 kB' 'KernelStack: 12944 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8211976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.563 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.564 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.565 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.565 nr_hugepages=1024 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.565 resv_hugepages=0 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.565 surplus_hugepages=0 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.565 anon_hugepages=0 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45473860 kB' 'MemAvailable: 48977916 kB' 'Buffers: 2704 kB' 'Cached: 10519236 kB' 'SwapCached: 0 kB' 'Active: 7529716 kB' 'Inactive: 3506192 kB' 'Active(anon): 7134220 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517192 kB' 'Mapped: 199912 kB' 'Shmem: 6620252 kB' 'KReclaimable: 192416 kB' 'Slab: 558208 kB' 'SReclaimable: 192416 kB' 'SUnreclaim: 365792 kB' 'KernelStack: 12944 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 8212000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 14886912 kB' 'DirectMap1G: 52428800 kB' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.565 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.566 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.826 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21275528 kB' 'MemUsed: 11601412 kB' 'SwapCached: 0 kB' 'Active: 5004056 kB' 'Inactive: 3357228 kB' 'Active(anon): 4732124 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3357228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8258372 kB' 'Mapped: 102568 kB' 'AnonPages: 106052 kB' 'Shmem: 4629212 kB' 'KernelStack: 6488 kB' 'PageTables: 3164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87584 kB' 'Slab: 299464 kB' 'SReclaimable: 87584 kB' 'SUnreclaim: 211880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.827 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:03:34.828 node0=1024 expecting 1024 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.828 00:03:34.828 real 0m2.931s 00:03:34.828 user 0m1.238s 00:03:34.828 sys 0m1.602s 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.828 05:55:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.829 ************************************ 00:03:34.829 END TEST no_shrink_alloc 00:03:34.829 ************************************ 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.829 05:55:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.829 05:55:45 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.829 00:03:34.829 real 0m10.982s 00:03:34.829 user 0m4.294s 00:03:34.829 sys 0m5.580s 00:03:34.829 05:55:45 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.829 05:55:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.829 ************************************ 00:03:34.829 END TEST hugepages 00:03:34.829 ************************************ 00:03:34.829 05:55:45 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:34.829 05:55:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.829 05:55:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.829 05:55:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.829 ************************************ 00:03:34.829 START TEST driver 00:03:34.829 ************************************ 00:03:34.829 05:55:45 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:34.829 * Looking for test storage... 
00:03:34.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:34.829 05:55:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:34.829 05:55:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.829 05:55:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.360 05:55:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:37.360 05:55:48 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.360 05:55:48 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.360 05:55:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.360 ************************************ 00:03:37.360 START TEST guess_driver 00:03:37.360 ************************************ 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 141 > 0 )) 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:37.360 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:37.360 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:37.360 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:37.360 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:37.360 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:37.360 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:37.360 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:37.360 Looking for driver=vfio-pci 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.360 05:55:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.295 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.553 05:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.490 05:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.490 05:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:39.490 05:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.490 05:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:39.490 05:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:39.490 05:55:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.490 05:55:50 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.021 00:03:42.021 real 0m4.765s 00:03:42.021 user 0m1.078s 00:03:42.021 sys 0m1.822s 00:03:42.021 05:55:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.021 05:55:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:42.021 ************************************ 00:03:42.021 END TEST guess_driver 00:03:42.021 ************************************ 00:03:42.021 00:03:42.021 real 0m7.247s 00:03:42.021 user 0m1.590s 00:03:42.021 sys 0m2.771s 00:03:42.021 05:55:53 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.022 05:55:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:42.022 ************************************ 00:03:42.022 END TEST driver 00:03:42.022 ************************************ 00:03:42.022 05:55:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:42.022 05:55:53 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.022 05:55:53 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.022 05:55:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.022 ************************************ 00:03:42.022 START TEST devices 00:03:42.022 ************************************ 00:03:42.022 05:55:53 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:42.022 * Looking for test storage... 
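The guess_driver trace above picks vfio-pci because the machine has populated IOMMU groups (`(( 141 > 0 ))`) and `modprobe --show-depends vfio_pci` resolves to real `.ko` files. A minimal sketch of that decision, reduced to a pure function for illustration (the fallback name is an assumption based on the "No valid driver found" string matched in the log; the real driver.sh tries uio before giving up):

```shell
#!/usr/bin/env bash
# Sketch of the driver-selection logic traced above.
#   groups: count of entries under /sys/kernel/iommu_groups
#   mod_ok: 1 if `modprobe --show-depends vfio_pci` resolved to .ko files
guess_driver() {
    local groups=$1 mod_ok=$2
    # vfio-pci is usable only with a populated IOMMU (or unsafe no-IOMMU
    # mode) and a resolvable module dependency chain.
    if (( groups > 0 )) && (( mod_ok == 1 )); then
        echo vfio-pci
    else
        # assumed fallback; the real script tries uio_pci_generic and only
        # then reports "No valid driver found"
        echo uio_pci_generic
    fi
}

guess_driver 141 1   # the run above: 141 IOMMU groups, deps resolved
```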
00:03:42.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.022 05:55:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:42.022 05:55:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:42.022 05:55:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.022 05:55:53 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
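The `get_zoned_devs`/`is_block_zoned` trace above excludes zoned namespaces by reading each device's `queue/zoned` sysfs attribute and comparing it to `none` (here `[[ none != none ]]` fails, so nvme0n1 stays eligible). A sketch of that check, written over a passed-in attribute value rather than live sysfs so it is self-contained:

```shell
#!/usr/bin/env bash
# Sketch of is_block_zoned from the trace above: a device is zoned when
# /sys/block/<dev>/queue/zoned reads anything but "none".
is_block_zoned() {
    local zoned_attr=$1   # contents of /sys/block/<dev>/queue/zoned
    [[ $zoned_attr != none ]]
}

is_block_zoned none && echo zoned || echo conventional   # → conventional
```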
00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:43.943 05:55:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:43.943 05:55:54 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:43.943 No valid GPT data, bailing 00:03:43.943 05:55:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.943 05:55:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:43.943 05:55:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:43.943 05:55:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:43.943 05:55:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:43.943 05:55:54 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:43.943 05:55:54 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.943 05:55:54 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.943 05:55:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:43.943 ************************************ 00:03:43.943 START TEST nvme_mount 00:03:43.943 ************************************ 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:43.943 05:55:54 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:43.943 05:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:44.882 Creating new GPT entries in memory. 00:03:44.882 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:44.882 other utilities. 00:03:44.882 05:55:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:44.882 05:55:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.882 05:55:55 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:44.882 05:55:55 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:44.882 05:55:55 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:45.820 Creating new GPT entries in memory. 00:03:45.820 The operation has completed successfully. 
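The sgdisk range in the log (`--new=1:2048:2099199`) falls out of common.sh's sector arithmetic: `size` of 1073741824 bytes divided by 512-byte sectors gives 2097152 sectors, the first partition starts at sector 2048, and each next partition starts one past the previous end. A sketch of that computation as a standalone function:

```shell
#!/usr/bin/env bash
# Sketch of the partition-bound arithmetic from common.sh@58-59.
#   part_range PREV_START PREV_END SIZE_SECTORS -> "start:end" for sgdisk
part_range() {
    local part_start=$1 part_end=$2 size_sectors=$3
    # first partition starts at sector 2048; later ones follow the previous
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size_sectors - 1 ))
    echo "${part_start}:${part_end}"
}

size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
part_range 0 0 "$size"         # → 2048:2099199, matching the log
```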
00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4186484 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
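The `verify` call above checks `setup.sh config` output line by line as `read -r pci _ _ status`, marking the test device found when the status text names its active mount. A sketch of that parse over a sample line; the exact line format is an assumption modeled on the "Active devices: ... so not binding PCI dev" messages in this log, not verbatim setup.sh output:

```shell
#!/usr/bin/env bash
# Sketch of the verify-loop parse from devices.sh@60-63.
# found_in_status LINE PCI MOUNTS: succeeds when LINE is for PCI and its
# status text lists MOUNTS among the active devices.
found_in_status() {
    local line=$1 dev_pci=$2 mounts=$3
    local pci status
    read -r pci _ _ status <<< "$line"
    [[ $pci == "$dev_pci" && $status == *"Active devices: "*"$mounts"* ]]
}

# hypothetical sample line, format assumed from the log's match patterns
line='0000:88:00.0 (8086 0a54): Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev'
found_in_status "$line" 0000:88:00.0 nvme0n1:nvme0n1p1 && echo found
```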
00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.820 05:55:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.754 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.754 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:46.754 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.754 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.754 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.754 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.755 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:47.015 
05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:47.015 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:47.015 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:47.274 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:47.274 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:47.274 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:47.274 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.274 05:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:48.653 05:55:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.653 05:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:49.592 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.852 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.852 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:49.852 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:49.852 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:49.852 05:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.852 05:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.852 05:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.852 05:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:49.852 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.852 00:03:49.852 real 0m6.136s 00:03:49.852 user 0m1.389s 00:03:49.852 sys 0m2.319s 00:03:49.852 05:56:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.852 05:56:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:49.852 ************************************ 00:03:49.852 END TEST nvme_mount 00:03:49.852 ************************************ 00:03:49.852 05:56:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:49.852 05:56:01 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:03:49.852 05:56:01 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.852 05:56:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.852 ************************************ 00:03:49.852 START TEST dm_mount 00:03:49.852 ************************************ 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.852 05:56:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:50.790 Creating new GPT entries in memory. 00:03:50.790 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.790 other utilities. 00:03:50.790 05:56:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.790 05:56:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.790 05:56:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.790 05:56:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.790 05:56:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:52.170 Creating new GPT entries in memory. 00:03:52.170 The operation has completed successfully. 00:03:52.170 05:56:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.170 05:56:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.170 05:56:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.170 05:56:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.170 05:56:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:53.107 The operation has completed successfully. 
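The two `sgdisk --new` calls above follow from the arithmetic in `setup/common.sh`: a 1 GiB partition size is converted from bytes to 512-byte sectors, the first partition starts at sector 2048, and each subsequent partition starts one sector past the previous end. A self-contained sketch of that computation (variable names other than `part_start`/`part_end`/`size` are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the partition-boundary arithmetic behind the sgdisk calls:
# two 1 GiB partitions on nvme0n1, sizes expressed in 512-byte sectors.
size=1073741824          # 1 GiB in bytes
(( size /= 512 ))        # -> 2097152 sectors

part_start=0 part_end=0
args=()
for part in 1 2; do
    # First partition starts at sector 2048; later ones start
    # immediately after the previous partition's last sector.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    args+=("--new=$part:$part_start:$part_end")
done
printf '%s\n' "${args[@]}"
```

Running this reproduces the exact ranges in the log: `--new=1:2048:2099199` and `--new=2:2099200:4196351`.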
00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4188866 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:53.107 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.108 05:56:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.043 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.301 05:56:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.677 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:55.678 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:55.678 00:03:55.678 real 0m5.761s 00:03:55.678 user 0m1.008s 00:03:55.678 sys 0m1.584s 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.678 05:56:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:55.678 ************************************ 00:03:55.678 END TEST dm_mount 00:03:55.678 ************************************ 00:03:55.678 05:56:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:55.678 05:56:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:55.678 05:56:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.678 05:56:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.678 05:56:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.678 05:56:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.678 05:56:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.937 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:55.937 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:55.937 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.937 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.937 05:56:07 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:55.937 05:56:07 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.938 05:56:07 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:55.938 05:56:07 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.938 05:56:07 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.938 05:56:07 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.938 05:56:07 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:55.938 00:03:55.938 real 0m13.822s 00:03:55.938 user 0m3.064s 00:03:55.938 sys 0m4.929s 00:03:55.938 05:56:07 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.938 05:56:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.938 ************************************ 00:03:55.938 END TEST devices 00:03:55.938 ************************************ 00:03:55.938 00:03:55.938 real 0m43.065s 00:03:55.938 user 0m12.348s 00:03:55.938 sys 0m18.883s 00:03:55.938 05:56:07 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.938 05:56:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.938 ************************************ 00:03:55.938 END TEST setup.sh 00:03:55.938 ************************************ 00:03:55.938 05:56:07 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:57.315 Hugepages 00:03:57.315 node hugesize free / total 00:03:57.315 node0 1048576kB 0 / 0 00:03:57.315 node0 2048kB 2048 / 2048 00:03:57.315 node1 1048576kB 0 / 0 00:03:57.315 node1 2048kB 0 / 0 00:03:57.315 00:03:57.315 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.315 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:57.315 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:57.315 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:57.315 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:57.315 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:57.315 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:57.315 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:57.315 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:57.315 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:57.315 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:57.315 05:56:08 -- spdk/autotest.sh@130 -- # uname -s 00:03:57.315 05:56:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:57.315 05:56:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:57.315 05:56:08 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.251 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:58.251 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:58.511 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:58.511 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:58.511 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:58.511 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:58.511 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:58.511 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:58.511 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:59.450 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:59.450 05:56:10 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:00.825 05:56:11 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:04:00.825 05:56:11 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:00.825 05:56:11 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:00.825 05:56:11 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:00.825 05:56:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:00.826 05:56:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:00.826 05:56:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.826 05:56:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:00.826 05:56:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:00.826 05:56:11 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:00.826 05:56:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:00.826 05:56:11 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.768 Waiting for block devices as requested 00:04:01.768 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:01.768 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:01.768 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:02.027 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:02.027 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:02.027 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:02.027 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:02.287 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:02.287 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:02.287 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:02.287 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:02.546 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:02.546 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:02.546 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:02.805 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:04:02.805 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:02.805 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:03.063 05:56:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:03.063 05:56:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:03.063 05:56:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:03.063 05:56:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:03.063 05:56:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:03.063 05:56:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:03.063 05:56:14 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:03.063 05:56:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:03.063 05:56:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:03.063 05:56:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:03.063 05:56:14 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:03.063 05:56:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:03.063 05:56:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:03.063 05:56:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:03.063 05:56:14 -- 
common/autotest_common.sh@1557 -- # continue 00:04:03.063 05:56:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:03.063 05:56:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.063 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:04:03.063 05:56:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:03.063 05:56:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.063 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:04:03.063 05:56:14 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.442 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:04.442 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:04.442 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:04.442 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:04.442 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:04.442 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:04.442 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:04.442 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:04.442 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:05.378 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.378 05:56:16 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:05.378 05:56:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.378 05:56:16 -- common/autotest_common.sh@10 -- # set +x 00:04:05.378 05:56:16 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:05.379 05:56:16 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:05.379 05:56:16 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.379 05:56:16 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:05.379 05:56:16 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:05.379 05:56:16 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:05.379 05:56:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:05.379 05:56:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:05.379 05:56:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.379 05:56:16 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:05.379 05:56:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:05.379 05:56:16 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:05.379 05:56:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:05.379 05:56:16 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:05.379 05:56:16 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:05.379 05:56:16 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:05.379 05:56:16 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:05.379 05:56:16 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:05.379 05:56:16 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:05.379 05:56:16 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:05.379 05:56:16 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4194051 00:04:05.379 05:56:16 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.379 05:56:16 -- common/autotest_common.sh@1598 -- # waitforlisten 4194051 00:04:05.379 05:56:16 -- common/autotest_common.sh@831 -- # '[' -z 4194051 ']' 00:04:05.379 05:56:16 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:05.379 05:56:16 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.379 05:56:16 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.379 05:56:16 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.379 05:56:16 -- common/autotest_common.sh@10 -- # set +x 00:04:05.639 [2024-07-26 05:56:16.759317] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:05.639 [2024-07-26 05:56:16.759470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4194051 ] 00:04:05.639 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.639 [2024-07-26 05:56:16.890228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.899 [2024-07-26 05:56:17.144510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.835 05:56:18 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.835 05:56:18 -- common/autotest_common.sh@864 -- # return 0 00:04:06.835 05:56:18 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:06.835 05:56:18 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:06.835 05:56:18 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:10.125 nvme0n1 00:04:10.125 05:56:21 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:10.125 [2024-07-26 05:56:21.382312] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session 
with error 18 00:04:10.125 [2024-07-26 05:56:21.382379] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:10.126 request: 00:04:10.126 { 00:04:10.126 "nvme_ctrlr_name": "nvme0", 00:04:10.126 "password": "test", 00:04:10.126 "method": "bdev_nvme_opal_revert", 00:04:10.126 "req_id": 1 00:04:10.126 } 00:04:10.126 Got JSON-RPC error response 00:04:10.126 response: 00:04:10.126 { 00:04:10.126 "code": -32603, 00:04:10.126 "message": "Internal error" 00:04:10.126 } 00:04:10.126 05:56:21 -- common/autotest_common.sh@1604 -- # true 00:04:10.126 05:56:21 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:10.126 05:56:21 -- common/autotest_common.sh@1608 -- # killprocess 4194051 00:04:10.126 05:56:21 -- common/autotest_common.sh@950 -- # '[' -z 4194051 ']' 00:04:10.126 05:56:21 -- common/autotest_common.sh@954 -- # kill -0 4194051 00:04:10.126 05:56:21 -- common/autotest_common.sh@955 -- # uname 00:04:10.126 05:56:21 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.126 05:56:21 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4194051 00:04:10.126 05:56:21 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:10.126 05:56:21 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.126 05:56:21 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4194051' 00:04:10.126 killing process with pid 4194051 00:04:10.126 05:56:21 -- common/autotest_common.sh@969 -- # kill 4194051 00:04:10.126 05:56:21 -- common/autotest_common.sh@974 -- # wait 4194051 00:04:14.326 05:56:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:14.326 05:56:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:14.326 05:56:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:14.326 05:56:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:14.326 05:56:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:14.326 05:56:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.326 
05:56:25 -- common/autotest_common.sh@10 -- # set +x 00:04:14.326 05:56:25 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:14.326 05:56:25 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:14.326 05:56:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.326 05:56:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.326 05:56:25 -- common/autotest_common.sh@10 -- # set +x 00:04:14.326 ************************************ 00:04:14.326 START TEST env 00:04:14.326 ************************************ 00:04:14.326 05:56:25 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:14.326 * Looking for test storage... 00:04:14.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:14.326 05:56:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:14.326 05:56:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.326 05:56:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.326 05:56:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.326 ************************************ 00:04:14.326 START TEST env_memory 00:04:14.326 ************************************ 00:04:14.326 05:56:25 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:14.326 00:04:14.326 00:04:14.326 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.326 http://cunit.sourceforge.net/ 00:04:14.326 00:04:14.326 00:04:14.326 Suite: memory 00:04:14.326 Test: alloc and free memory map ...[2024-07-26 05:56:25.322658] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:14.326 passed 00:04:14.326 Test: mem map translation 
...[2024-07-26 05:56:25.375186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:14.326 [2024-07-26 05:56:25.375239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:14.326 [2024-07-26 05:56:25.375327] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:14.326 [2024-07-26 05:56:25.375364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:14.326 passed 00:04:14.326 Test: mem map registration ...[2024-07-26 05:56:25.458372] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:14.326 [2024-07-26 05:56:25.458424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:14.326 passed 00:04:14.326 Test: mem map adjacent registrations ...passed 00:04:14.326 00:04:14.326 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.327 suites 1 1 n/a 0 0 00:04:14.327 tests 4 4 4 0 0 00:04:14.327 asserts 152 152 152 0 n/a 00:04:14.327 00:04:14.327 Elapsed time = 0.290 seconds 00:04:14.327 00:04:14.327 real 0m0.309s 00:04:14.327 user 0m0.295s 00:04:14.327 sys 0m0.012s 00:04:14.327 05:56:25 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.327 05:56:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:14.327 ************************************ 00:04:14.327 END TEST env_memory 00:04:14.327 
************************************ 00:04:14.327 05:56:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:14.327 05:56:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.327 05:56:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.327 05:56:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.327 ************************************ 00:04:14.327 START TEST env_vtophys 00:04:14.327 ************************************ 00:04:14.327 05:56:25 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:14.588 EAL: lib.eal log level changed from notice to debug 00:04:14.588 EAL: Detected lcore 0 as core 0 on socket 0 00:04:14.588 EAL: Detected lcore 1 as core 1 on socket 0 00:04:14.588 EAL: Detected lcore 2 as core 2 on socket 0 00:04:14.588 EAL: Detected lcore 3 as core 3 on socket 0 00:04:14.588 EAL: Detected lcore 4 as core 4 on socket 0 00:04:14.588 EAL: Detected lcore 5 as core 5 on socket 0 00:04:14.588 EAL: Detected lcore 6 as core 8 on socket 0 00:04:14.588 EAL: Detected lcore 7 as core 9 on socket 0 00:04:14.588 EAL: Detected lcore 8 as core 10 on socket 0 00:04:14.588 EAL: Detected lcore 9 as core 11 on socket 0 00:04:14.588 EAL: Detected lcore 10 as core 12 on socket 0 00:04:14.588 EAL: Detected lcore 11 as core 13 on socket 0 00:04:14.588 EAL: Detected lcore 12 as core 0 on socket 1 00:04:14.588 EAL: Detected lcore 13 as core 1 on socket 1 00:04:14.588 EAL: Detected lcore 14 as core 2 on socket 1 00:04:14.588 EAL: Detected lcore 15 as core 3 on socket 1 00:04:14.588 EAL: Detected lcore 16 as core 4 on socket 1 00:04:14.588 EAL: Detected lcore 17 as core 5 on socket 1 00:04:14.588 EAL: Detected lcore 18 as core 8 on socket 1 00:04:14.588 EAL: Detected lcore 19 as core 9 on socket 1 00:04:14.588 EAL: Detected lcore 20 as core 10 on socket 1 00:04:14.588 EAL: 
Detected lcore 21 as core 11 on socket 1 00:04:14.588 EAL: Detected lcore 22 as core 12 on socket 1 00:04:14.588 EAL: Detected lcore 23 as core 13 on socket 1 00:04:14.588 EAL: Detected lcore 24 as core 0 on socket 0 00:04:14.588 EAL: Detected lcore 25 as core 1 on socket 0 00:04:14.588 EAL: Detected lcore 26 as core 2 on socket 0 00:04:14.588 EAL: Detected lcore 27 as core 3 on socket 0 00:04:14.588 EAL: Detected lcore 28 as core 4 on socket 0 00:04:14.588 EAL: Detected lcore 29 as core 5 on socket 0 00:04:14.588 EAL: Detected lcore 30 as core 8 on socket 0 00:04:14.588 EAL: Detected lcore 31 as core 9 on socket 0 00:04:14.588 EAL: Detected lcore 32 as core 10 on socket 0 00:04:14.588 EAL: Detected lcore 33 as core 11 on socket 0 00:04:14.588 EAL: Detected lcore 34 as core 12 on socket 0 00:04:14.588 EAL: Detected lcore 35 as core 13 on socket 0 00:04:14.588 EAL: Detected lcore 36 as core 0 on socket 1 00:04:14.588 EAL: Detected lcore 37 as core 1 on socket 1 00:04:14.588 EAL: Detected lcore 38 as core 2 on socket 1 00:04:14.588 EAL: Detected lcore 39 as core 3 on socket 1 00:04:14.588 EAL: Detected lcore 40 as core 4 on socket 1 00:04:14.588 EAL: Detected lcore 41 as core 5 on socket 1 00:04:14.588 EAL: Detected lcore 42 as core 8 on socket 1 00:04:14.588 EAL: Detected lcore 43 as core 9 on socket 1 00:04:14.588 EAL: Detected lcore 44 as core 10 on socket 1 00:04:14.588 EAL: Detected lcore 45 as core 11 on socket 1 00:04:14.588 EAL: Detected lcore 46 as core 12 on socket 1 00:04:14.588 EAL: Detected lcore 47 as core 13 on socket 1 00:04:14.588 EAL: Maximum logical cores by configuration: 128 00:04:14.588 EAL: Detected CPU lcores: 48 00:04:14.588 EAL: Detected NUMA nodes: 2 00:04:14.588 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:14.588 EAL: Detected shared linkage of DPDK 00:04:14.588 EAL: No shared files mode enabled, IPC will be disabled 00:04:14.588 EAL: Bus pci wants IOVA as 'DC' 00:04:14.588 EAL: Buses did not request a specific IOVA mode. 
00:04:14.588 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:14.588 EAL: Selected IOVA mode 'VA' 00:04:14.588 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.588 EAL: Probing VFIO support... 00:04:14.588 EAL: IOMMU type 1 (Type 1) is supported 00:04:14.588 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:14.588 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:14.588 EAL: VFIO support initialized 00:04:14.588 EAL: Ask a virtual area of 0x2e000 bytes 00:04:14.588 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:14.588 EAL: Setting up physically contiguous memory... 00:04:14.588 EAL: Setting maximum number of open files to 524288 00:04:14.588 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:14.588 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:14.588 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 
EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:14.588 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:14.588 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.588 EAL: Virtual area found at 0x201c00e00000 (size = 
0x61000) 00:04:14.588 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:14.588 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.588 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:14.588 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:14.588 EAL: Hugepages will be freed exactly as allocated. 00:04:14.588 EAL: No shared files mode enabled, IPC is disabled 00:04:14.588 EAL: No shared files mode enabled, IPC is disabled 00:04:14.588 EAL: TSC frequency is ~2700000 KHz 00:04:14.588 EAL: Main lcore 0 is ready (tid=7f5520965a40;cpuset=[0]) 00:04:14.588 EAL: Trying to obtain current memory policy. 00:04:14.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.588 EAL: Restoring previous memory policy: 0 00:04:14.588 EAL: request: mp_malloc_sync 00:04:14.588 EAL: No shared files mode enabled, IPC is disabled 00:04:14.588 EAL: Heap on socket 0 was expanded by 2MB 00:04:14.588 EAL: No shared files mode enabled, IPC is disabled 00:04:14.588 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:14.588 EAL: Mem event callback 'spdk:(nil)' registered 00:04:14.588 00:04:14.588 00:04:14.588 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.588 http://cunit.sourceforge.net/ 00:04:14.588 00:04:14.588 00:04:14.588 Suite: components_suite 00:04:15.156 Test: vtophys_malloc_test ...passed 00:04:15.156 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:15.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.156 EAL: Restoring previous memory policy: 4 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.156 EAL: Trying to obtain current memory policy. 00:04:15.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.156 EAL: Restoring previous memory policy: 4 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.156 EAL: Trying to obtain current memory policy. 00:04:15.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.156 EAL: Restoring previous memory policy: 4 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.156 EAL: Trying to obtain current memory policy. 
00:04:15.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.156 EAL: Restoring previous memory policy: 4 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.156 EAL: Trying to obtain current memory policy. 00:04:15.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.156 EAL: Restoring previous memory policy: 4 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.156 EAL: Trying to obtain current memory policy. 00:04:15.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.156 EAL: Restoring previous memory policy: 4 00:04:15.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.156 EAL: request: mp_malloc_sync 00:04:15.156 EAL: No shared files mode enabled, IPC is disabled 00:04:15.156 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.416 EAL: request: mp_malloc_sync 00:04:15.416 EAL: No shared files mode enabled, IPC is disabled 00:04:15.416 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.416 EAL: Trying to obtain current memory policy. 
00:04:15.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.416 EAL: Restoring previous memory policy: 4 00:04:15.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.416 EAL: request: mp_malloc_sync 00:04:15.416 EAL: No shared files mode enabled, IPC is disabled 00:04:15.416 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.675 EAL: request: mp_malloc_sync 00:04:15.675 EAL: No shared files mode enabled, IPC is disabled 00:04:15.675 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.934 EAL: Trying to obtain current memory policy. 00:04:15.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.934 EAL: Restoring previous memory policy: 4 00:04:15.934 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.934 EAL: request: mp_malloc_sync 00:04:15.934 EAL: No shared files mode enabled, IPC is disabled 00:04:15.934 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.504 EAL: request: mp_malloc_sync 00:04:16.504 EAL: No shared files mode enabled, IPC is disabled 00:04:16.504 EAL: Heap on socket 0 was shrunk by 258MB 00:04:17.072 EAL: Trying to obtain current memory policy. 00:04:17.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.072 EAL: Restoring previous memory policy: 4 00:04:17.072 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.072 EAL: request: mp_malloc_sync 00:04:17.072 EAL: No shared files mode enabled, IPC is disabled 00:04:17.072 EAL: Heap on socket 0 was expanded by 514MB 00:04:18.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.270 EAL: request: mp_malloc_sync 00:04:18.270 EAL: No shared files mode enabled, IPC is disabled 00:04:18.270 EAL: Heap on socket 0 was shrunk by 514MB 00:04:18.838 EAL: Trying to obtain current memory policy. 
00:04:18.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.405 EAL: Restoring previous memory policy: 4 00:04:19.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.405 EAL: request: mp_malloc_sync 00:04:19.405 EAL: No shared files mode enabled, IPC is disabled 00:04:19.405 EAL: Heap on socket 0 was expanded by 1026MB 00:04:21.348 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.348 EAL: request: mp_malloc_sync 00:04:21.348 EAL: No shared files mode enabled, IPC is disabled 00:04:21.348 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:23.266 passed 00:04:23.266 00:04:23.266 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.266 suites 1 1 n/a 0 0 00:04:23.266 tests 2 2 2 0 0 00:04:23.266 asserts 497 497 497 0 n/a 00:04:23.266 00:04:23.266 Elapsed time = 8.318 seconds 00:04:23.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.266 EAL: request: mp_malloc_sync 00:04:23.266 EAL: No shared files mode enabled, IPC is disabled 00:04:23.266 EAL: Heap on socket 0 was shrunk by 2MB 00:04:23.266 EAL: No shared files mode enabled, IPC is disabled 00:04:23.266 EAL: No shared files mode enabled, IPC is disabled 00:04:23.266 EAL: No shared files mode enabled, IPC is disabled 00:04:23.266 00:04:23.266 real 0m8.584s 00:04:23.266 user 0m7.483s 00:04:23.266 sys 0m1.036s 00:04:23.266 05:56:34 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.266 05:56:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 ************************************ 00:04:23.266 END TEST env_vtophys 00:04:23.266 ************************************ 00:04:23.266 05:56:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:23.266 05:56:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.266 05:56:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.266 05:56:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 
************************************ 00:04:23.266 START TEST env_pci 00:04:23.266 ************************************ 00:04:23.266 05:56:34 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:23.266 00:04:23.266 00:04:23.266 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.266 http://cunit.sourceforge.net/ 00:04:23.266 00:04:23.266 00:04:23.266 Suite: pci 00:04:23.266 Test: pci_hook ...[2024-07-26 05:56:34.283803] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2321 has claimed it 00:04:23.266 EAL: Cannot find device (10000:00:01.0) 00:04:23.266 EAL: Failed to attach device on primary process 00:04:23.266 passed 00:04:23.266 00:04:23.266 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.266 suites 1 1 n/a 0 0 00:04:23.266 tests 1 1 1 0 0 00:04:23.266 asserts 25 25 25 0 n/a 00:04:23.266 00:04:23.266 Elapsed time = 0.043 seconds 00:04:23.266 00:04:23.266 real 0m0.094s 00:04:23.266 user 0m0.031s 00:04:23.266 sys 0m0.061s 00:04:23.266 05:56:34 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.266 05:56:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 ************************************ 00:04:23.266 END TEST env_pci 00:04:23.266 ************************************ 00:04:23.266 05:56:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:23.266 05:56:34 env -- env/env.sh@15 -- # uname 00:04:23.266 05:56:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:23.266 05:56:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:23.266 05:56:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.266 05:56:34 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:23.266 05:56:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.266 05:56:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 ************************************ 00:04:23.266 START TEST env_dpdk_post_init 00:04:23.266 ************************************ 00:04:23.266 05:56:34 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.266 EAL: Detected CPU lcores: 48 00:04:23.266 EAL: Detected NUMA nodes: 2 00:04:23.266 EAL: Detected shared linkage of DPDK 00:04:23.266 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.266 EAL: Selected IOVA mode 'VA' 00:04:23.266 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.266 EAL: VFIO support initialized 00:04:23.266 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:23.526 EAL: Using IOMMU type 1 (Type 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 
0000:80:04.2 (socket 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:23.526 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:24.464 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:27.752 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:27.752 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:27.752 Starting DPDK initialization... 00:04:27.752 Starting SPDK post initialization... 00:04:27.752 SPDK NVMe probe 00:04:27.752 Attaching to 0000:88:00.0 00:04:27.752 Attached to 0000:88:00.0 00:04:27.752 Cleaning up... 00:04:27.752 00:04:27.752 real 0m4.558s 00:04:27.752 user 0m3.378s 00:04:27.752 sys 0m0.235s 00:04:27.752 05:56:38 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.752 05:56:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.752 ************************************ 00:04:27.752 END TEST env_dpdk_post_init 00:04:27.752 ************************************ 00:04:27.752 05:56:38 env -- env/env.sh@26 -- # uname 00:04:27.752 05:56:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:27.752 05:56:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.752 05:56:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.752 05:56:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.752 05:56:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.752 ************************************ 00:04:27.752 START TEST env_mem_callbacks 00:04:27.752 
************************************ 00:04:27.752 05:56:39 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.752 EAL: Detected CPU lcores: 48 00:04:27.752 EAL: Detected NUMA nodes: 2 00:04:27.752 EAL: Detected shared linkage of DPDK 00:04:27.752 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.010 EAL: Selected IOVA mode 'VA' 00:04:28.010 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.010 EAL: VFIO support initialized 00:04:28.010 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.010 00:04:28.010 00:04:28.010 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.010 http://cunit.sourceforge.net/ 00:04:28.010 00:04:28.010 00:04:28.010 Suite: memory 00:04:28.010 Test: test ... 00:04:28.010 register 0x200000200000 2097152 00:04:28.010 malloc 3145728 00:04:28.010 register 0x200000400000 4194304 00:04:28.010 buf 0x2000004fffc0 len 3145728 PASSED 00:04:28.010 malloc 64 00:04:28.010 buf 0x2000004ffec0 len 64 PASSED 00:04:28.010 malloc 4194304 00:04:28.010 register 0x200000800000 6291456 00:04:28.010 buf 0x2000009fffc0 len 4194304 PASSED 00:04:28.010 free 0x2000004fffc0 3145728 00:04:28.010 free 0x2000004ffec0 64 00:04:28.010 unregister 0x200000400000 4194304 PASSED 00:04:28.010 free 0x2000009fffc0 4194304 00:04:28.010 unregister 0x200000800000 6291456 PASSED 00:04:28.010 malloc 8388608 00:04:28.010 register 0x200000400000 10485760 00:04:28.010 buf 0x2000005fffc0 len 8388608 PASSED 00:04:28.010 free 0x2000005fffc0 8388608 00:04:28.010 unregister 0x200000400000 10485760 PASSED 00:04:28.010 passed 00:04:28.010 00:04:28.010 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.010 suites 1 1 n/a 0 0 00:04:28.010 tests 1 1 1 0 0 00:04:28.010 asserts 15 15 15 0 n/a 00:04:28.010 00:04:28.010 Elapsed time = 0.060 seconds 00:04:28.010 00:04:28.010 real 0m0.179s 00:04:28.010 user 0m0.095s 00:04:28.010 sys 0m0.083s 
00:04:28.010 05:56:39 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.010 05:56:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:28.010 ************************************ 00:04:28.010 END TEST env_mem_callbacks 00:04:28.010 ************************************ 00:04:28.010 00:04:28.010 real 0m14.009s 00:04:28.010 user 0m11.409s 00:04:28.010 sys 0m1.606s 00:04:28.011 05:56:39 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.011 05:56:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.011 ************************************ 00:04:28.011 END TEST env 00:04:28.011 ************************************ 00:04:28.011 05:56:39 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:28.011 05:56:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.011 05:56:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.011 05:56:39 -- common/autotest_common.sh@10 -- # set +x 00:04:28.011 ************************************ 00:04:28.011 START TEST rpc 00:04:28.011 ************************************ 00:04:28.011 05:56:39 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:28.011 * Looking for test storage... 
00:04:28.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.011 05:56:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3155 00:04:28.011 05:56:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:28.011 05:56:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.011 05:56:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3155 00:04:28.011 05:56:39 rpc -- common/autotest_common.sh@831 -- # '[' -z 3155 ']' 00:04:28.011 05:56:39 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.011 05:56:39 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.011 05:56:39 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.011 05:56:39 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.011 05:56:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.270 [2024-07-26 05:56:39.390095] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:28.270 [2024-07-26 05:56:39.390234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155 ] 00:04:28.270 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.270 [2024-07-26 05:56:39.524290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.530 [2024-07-26 05:56:39.776132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:28.530 [2024-07-26 05:56:39.776193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3155' to capture a snapshot of events at runtime. 
00:04:28.530 [2024-07-26 05:56:39.776219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:28.530 [2024-07-26 05:56:39.776248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:28.530 [2024-07-26 05:56:39.776268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3155 for offline analysis/debug. 00:04:28.530 [2024-07-26 05:56:39.776322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.468 05:56:40 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:29.468 05:56:40 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:29.468 05:56:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.468 05:56:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.468 05:56:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:29.468 05:56:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:29.468 05:56:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.468 05:56:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.468 05:56:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.468 ************************************ 00:04:29.468 START TEST rpc_integrity 00:04:29.468 ************************************ 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- 
# rpc_integrity 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.468 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.468 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.468 { 00:04:29.468 "name": "Malloc0", 00:04:29.468 "aliases": [ 00:04:29.468 "32b4cef8-909d-46dd-9740-e0f108ebb751" 00:04:29.468 ], 00:04:29.468 "product_name": "Malloc disk", 00:04:29.468 "block_size": 512, 00:04:29.468 "num_blocks": 16384, 00:04:29.468 "uuid": "32b4cef8-909d-46dd-9740-e0f108ebb751", 00:04:29.468 "assigned_rate_limits": { 00:04:29.468 "rw_ios_per_sec": 0, 00:04:29.468 "rw_mbytes_per_sec": 0, 00:04:29.468 "r_mbytes_per_sec": 0, 00:04:29.468 "w_mbytes_per_sec": 0 00:04:29.468 }, 00:04:29.468 "claimed": false, 
00:04:29.468 "zoned": false, 00:04:29.468 "supported_io_types": { 00:04:29.468 "read": true, 00:04:29.468 "write": true, 00:04:29.468 "unmap": true, 00:04:29.468 "flush": true, 00:04:29.468 "reset": true, 00:04:29.468 "nvme_admin": false, 00:04:29.468 "nvme_io": false, 00:04:29.468 "nvme_io_md": false, 00:04:29.468 "write_zeroes": true, 00:04:29.468 "zcopy": true, 00:04:29.468 "get_zone_info": false, 00:04:29.468 "zone_management": false, 00:04:29.468 "zone_append": false, 00:04:29.468 "compare": false, 00:04:29.468 "compare_and_write": false, 00:04:29.468 "abort": true, 00:04:29.468 "seek_hole": false, 00:04:29.469 "seek_data": false, 00:04:29.469 "copy": true, 00:04:29.469 "nvme_iov_md": false 00:04:29.469 }, 00:04:29.469 "memory_domains": [ 00:04:29.469 { 00:04:29.469 "dma_device_id": "system", 00:04:29.469 "dma_device_type": 1 00:04:29.469 }, 00:04:29.469 { 00:04:29.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.469 "dma_device_type": 2 00:04:29.469 } 00:04:29.469 ], 00:04:29.469 "driver_specific": {} 00:04:29.469 } 00:04:29.469 ]' 00:04:29.469 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 [2024-07-26 05:56:40.813768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:29.729 [2024-07-26 05:56:40.813855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.729 [2024-07-26 05:56:40.813903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:29.729 [2024-07-26 05:56:40.813934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.729 
[2024-07-26 05:56:40.816690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.729 [2024-07-26 05:56:40.816733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.729 Passthru0 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.729 { 00:04:29.729 "name": "Malloc0", 00:04:29.729 "aliases": [ 00:04:29.729 "32b4cef8-909d-46dd-9740-e0f108ebb751" 00:04:29.729 ], 00:04:29.729 "product_name": "Malloc disk", 00:04:29.729 "block_size": 512, 00:04:29.729 "num_blocks": 16384, 00:04:29.729 "uuid": "32b4cef8-909d-46dd-9740-e0f108ebb751", 00:04:29.729 "assigned_rate_limits": { 00:04:29.729 "rw_ios_per_sec": 0, 00:04:29.729 "rw_mbytes_per_sec": 0, 00:04:29.729 "r_mbytes_per_sec": 0, 00:04:29.729 "w_mbytes_per_sec": 0 00:04:29.729 }, 00:04:29.729 "claimed": true, 00:04:29.729 "claim_type": "exclusive_write", 00:04:29.729 "zoned": false, 00:04:29.729 "supported_io_types": { 00:04:29.729 "read": true, 00:04:29.729 "write": true, 00:04:29.729 "unmap": true, 00:04:29.729 "flush": true, 00:04:29.729 "reset": true, 00:04:29.729 "nvme_admin": false, 00:04:29.729 "nvme_io": false, 00:04:29.729 "nvme_io_md": false, 00:04:29.729 "write_zeroes": true, 00:04:29.729 "zcopy": true, 00:04:29.729 "get_zone_info": false, 00:04:29.729 "zone_management": false, 00:04:29.729 "zone_append": false, 00:04:29.729 "compare": false, 00:04:29.729 "compare_and_write": false, 00:04:29.729 "abort": true, 00:04:29.729 "seek_hole": false, 00:04:29.729 "seek_data": 
false, 00:04:29.729 "copy": true, 00:04:29.729 "nvme_iov_md": false 00:04:29.729 }, 00:04:29.729 "memory_domains": [ 00:04:29.729 { 00:04:29.729 "dma_device_id": "system", 00:04:29.729 "dma_device_type": 1 00:04:29.729 }, 00:04:29.729 { 00:04:29.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.729 "dma_device_type": 2 00:04:29.729 } 00:04:29.729 ], 00:04:29.729 "driver_specific": {} 00:04:29.729 }, 00:04:29.729 { 00:04:29.729 "name": "Passthru0", 00:04:29.729 "aliases": [ 00:04:29.729 "d0cbdf5f-8277-54aa-bbba-46d441e9d820" 00:04:29.729 ], 00:04:29.729 "product_name": "passthru", 00:04:29.729 "block_size": 512, 00:04:29.729 "num_blocks": 16384, 00:04:29.729 "uuid": "d0cbdf5f-8277-54aa-bbba-46d441e9d820", 00:04:29.729 "assigned_rate_limits": { 00:04:29.729 "rw_ios_per_sec": 0, 00:04:29.729 "rw_mbytes_per_sec": 0, 00:04:29.729 "r_mbytes_per_sec": 0, 00:04:29.729 "w_mbytes_per_sec": 0 00:04:29.729 }, 00:04:29.729 "claimed": false, 00:04:29.729 "zoned": false, 00:04:29.729 "supported_io_types": { 00:04:29.729 "read": true, 00:04:29.729 "write": true, 00:04:29.729 "unmap": true, 00:04:29.729 "flush": true, 00:04:29.729 "reset": true, 00:04:29.729 "nvme_admin": false, 00:04:29.729 "nvme_io": false, 00:04:29.729 "nvme_io_md": false, 00:04:29.729 "write_zeroes": true, 00:04:29.729 "zcopy": true, 00:04:29.729 "get_zone_info": false, 00:04:29.729 "zone_management": false, 00:04:29.729 "zone_append": false, 00:04:29.729 "compare": false, 00:04:29.729 "compare_and_write": false, 00:04:29.729 "abort": true, 00:04:29.729 "seek_hole": false, 00:04:29.729 "seek_data": false, 00:04:29.729 "copy": true, 00:04:29.729 "nvme_iov_md": false 00:04:29.729 }, 00:04:29.729 "memory_domains": [ 00:04:29.729 { 00:04:29.729 "dma_device_id": "system", 00:04:29.729 "dma_device_type": 1 00:04:29.729 }, 00:04:29.729 { 00:04:29.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.729 "dma_device_type": 2 00:04:29.729 } 00:04:29.729 ], 00:04:29.729 "driver_specific": { 00:04:29.729 "passthru": 
{ 00:04:29.729 "name": "Passthru0", 00:04:29.729 "base_bdev_name": "Malloc0" 00:04:29.729 } 00:04:29.729 } 00:04:29.729 } 00:04:29.729 ]' 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.729 05:56:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.729 00:04:29.729 real 0m0.265s 00:04:29.729 user 0m0.157s 00:04:29.729 sys 0m0.021s 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.729 05:56:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 ************************************ 00:04:29.729 END TEST rpc_integrity 00:04:29.729 
************************************ 00:04:29.729 05:56:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:29.729 05:56:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.729 05:56:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.729 05:56:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 ************************************ 00:04:29.729 START TEST rpc_plugins 00:04:29.729 ************************************ 00:04:29.729 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:29.729 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:29.729 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.729 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.729 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:29.729 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:29.729 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.729 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.729 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.729 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:29.729 { 00:04:29.729 "name": "Malloc1", 00:04:29.729 "aliases": [ 00:04:29.729 "f0f12e84-d375-458c-92dc-3efe618a6bb4" 00:04:29.729 ], 00:04:29.729 "product_name": "Malloc disk", 00:04:29.729 "block_size": 4096, 00:04:29.729 "num_blocks": 256, 00:04:29.729 "uuid": "f0f12e84-d375-458c-92dc-3efe618a6bb4", 00:04:29.729 "assigned_rate_limits": { 00:04:29.729 "rw_ios_per_sec": 0, 00:04:29.729 "rw_mbytes_per_sec": 0, 00:04:29.730 "r_mbytes_per_sec": 0, 00:04:29.730 "w_mbytes_per_sec": 0 00:04:29.730 }, 00:04:29.730 "claimed": false, 00:04:29.730 "zoned": false, 
00:04:29.730 "supported_io_types": { 00:04:29.730 "read": true, 00:04:29.730 "write": true, 00:04:29.730 "unmap": true, 00:04:29.730 "flush": true, 00:04:29.730 "reset": true, 00:04:29.730 "nvme_admin": false, 00:04:29.730 "nvme_io": false, 00:04:29.730 "nvme_io_md": false, 00:04:29.730 "write_zeroes": true, 00:04:29.730 "zcopy": true, 00:04:29.730 "get_zone_info": false, 00:04:29.730 "zone_management": false, 00:04:29.730 "zone_append": false, 00:04:29.730 "compare": false, 00:04:29.730 "compare_and_write": false, 00:04:29.730 "abort": true, 00:04:29.730 "seek_hole": false, 00:04:29.730 "seek_data": false, 00:04:29.730 "copy": true, 00:04:29.730 "nvme_iov_md": false 00:04:29.730 }, 00:04:29.730 "memory_domains": [ 00:04:29.730 { 00:04:29.730 "dma_device_id": "system", 00:04:29.730 "dma_device_type": 1 00:04:29.730 }, 00:04:29.730 { 00:04:29.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.730 "dma_device_type": 2 00:04:29.730 } 00:04:29.730 ], 00:04:29.730 "driver_specific": {} 00:04:29.730 } 00:04:29.730 ]' 00:04:29.730 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:29.988 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:29.988 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.988 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.988 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:29.988 05:56:41 
rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:29.988 05:56:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:29.988 00:04:29.988 real 0m0.113s 00:04:29.988 user 0m0.072s 00:04:29.988 sys 0m0.010s 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.988 05:56:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.988 ************************************ 00:04:29.988 END TEST rpc_plugins 00:04:29.988 ************************************ 00:04:29.988 05:56:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:29.988 05:56:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.988 05:56:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.988 05:56:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.988 ************************************ 00:04:29.988 START TEST rpc_trace_cmd_test 00:04:29.988 ************************************ 00:04:29.988 05:56:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:29.988 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:29.988 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:29.988 05:56:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.988 05:56:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.988 05:56:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.988 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:29.988 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3155", 00:04:29.988 "tpoint_group_mask": "0x8", 00:04:29.988 "iscsi_conn": { 00:04:29.988 "mask": "0x2", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "scsi": { 00:04:29.988 "mask": "0x4", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "bdev": { 00:04:29.988 "mask": "0x8", 
00:04:29.988 "tpoint_mask": "0xffffffffffffffff" 00:04:29.988 }, 00:04:29.988 "nvmf_rdma": { 00:04:29.988 "mask": "0x10", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "nvmf_tcp": { 00:04:29.988 "mask": "0x20", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "ftl": { 00:04:29.988 "mask": "0x40", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "blobfs": { 00:04:29.988 "mask": "0x80", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "dsa": { 00:04:29.988 "mask": "0x200", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "thread": { 00:04:29.988 "mask": "0x400", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "nvme_pcie": { 00:04:29.988 "mask": "0x800", 00:04:29.988 "tpoint_mask": "0x0" 00:04:29.988 }, 00:04:29.988 "iaa": { 00:04:29.988 "mask": "0x1000", 00:04:29.989 "tpoint_mask": "0x0" 00:04:29.989 }, 00:04:29.989 "nvme_tcp": { 00:04:29.989 "mask": "0x2000", 00:04:29.989 "tpoint_mask": "0x0" 00:04:29.989 }, 00:04:29.989 "bdev_nvme": { 00:04:29.989 "mask": "0x4000", 00:04:29.989 "tpoint_mask": "0x0" 00:04:29.989 }, 00:04:29.989 "sock": { 00:04:29.989 "mask": "0x8000", 00:04:29.989 "tpoint_mask": "0x0" 00:04:29.989 } 00:04:29.989 }' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:29.989 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- 
# jq -r .bdev.tpoint_mask 00:04:30.247 05:56:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:30.247 00:04:30.247 real 0m0.195s 00:04:30.247 user 0m0.173s 00:04:30.247 sys 0m0.014s 00:04:30.247 05:56:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.247 05:56:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.247 ************************************ 00:04:30.247 END TEST rpc_trace_cmd_test 00:04:30.247 ************************************ 00:04:30.247 05:56:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:30.247 05:56:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:30.247 05:56:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:30.247 05:56:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.247 05:56:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.247 05:56:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.247 ************************************ 00:04:30.247 START TEST rpc_daemon_integrity 00:04:30.247 ************************************ 00:04:30.247 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:30.247 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.247 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.247 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.247 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.247 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.248 05:56:41 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.248 { 00:04:30.248 "name": "Malloc2", 00:04:30.248 "aliases": [ 00:04:30.248 "581a35e9-aaf1-4598-8bc9-bbe149a7d6e0" 00:04:30.248 ], 00:04:30.248 "product_name": "Malloc disk", 00:04:30.248 "block_size": 512, 00:04:30.248 "num_blocks": 16384, 00:04:30.248 "uuid": "581a35e9-aaf1-4598-8bc9-bbe149a7d6e0", 00:04:30.248 "assigned_rate_limits": { 00:04:30.248 "rw_ios_per_sec": 0, 00:04:30.248 "rw_mbytes_per_sec": 0, 00:04:30.248 "r_mbytes_per_sec": 0, 00:04:30.248 "w_mbytes_per_sec": 0 00:04:30.248 }, 00:04:30.248 "claimed": false, 00:04:30.248 "zoned": false, 00:04:30.248 "supported_io_types": { 00:04:30.248 "read": true, 00:04:30.248 "write": true, 00:04:30.248 "unmap": true, 00:04:30.248 "flush": true, 00:04:30.248 "reset": true, 00:04:30.248 "nvme_admin": false, 00:04:30.248 "nvme_io": false, 00:04:30.248 "nvme_io_md": false, 00:04:30.248 "write_zeroes": true, 00:04:30.248 "zcopy": true, 00:04:30.248 "get_zone_info": false, 00:04:30.248 "zone_management": false, 00:04:30.248 "zone_append": false, 00:04:30.248 "compare": false, 00:04:30.248 "compare_and_write": false, 00:04:30.248 "abort": true, 00:04:30.248 "seek_hole": false, 00:04:30.248 "seek_data": false, 
00:04:30.248 "copy": true, 00:04:30.248 "nvme_iov_md": false 00:04:30.248 }, 00:04:30.248 "memory_domains": [ 00:04:30.248 { 00:04:30.248 "dma_device_id": "system", 00:04:30.248 "dma_device_type": 1 00:04:30.248 }, 00:04:30.248 { 00:04:30.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.248 "dma_device_type": 2 00:04:30.248 } 00:04:30.248 ], 00:04:30.248 "driver_specific": {} 00:04:30.248 } 00:04:30.248 ]' 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.248 [2024-07-26 05:56:41.519492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:30.248 [2024-07-26 05:56:41.519567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.248 [2024-07-26 05:56:41.519611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:30.248 [2024-07-26 05:56:41.519640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.248 [2024-07-26 05:56:41.522324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.248 [2024-07-26 05:56:41.522376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.248 Passthru0 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.248 { 00:04:30.248 "name": "Malloc2", 00:04:30.248 "aliases": [ 00:04:30.248 "581a35e9-aaf1-4598-8bc9-bbe149a7d6e0" 00:04:30.248 ], 00:04:30.248 "product_name": "Malloc disk", 00:04:30.248 "block_size": 512, 00:04:30.248 "num_blocks": 16384, 00:04:30.248 "uuid": "581a35e9-aaf1-4598-8bc9-bbe149a7d6e0", 00:04:30.248 "assigned_rate_limits": { 00:04:30.248 "rw_ios_per_sec": 0, 00:04:30.248 "rw_mbytes_per_sec": 0, 00:04:30.248 "r_mbytes_per_sec": 0, 00:04:30.248 "w_mbytes_per_sec": 0 00:04:30.248 }, 00:04:30.248 "claimed": true, 00:04:30.248 "claim_type": "exclusive_write", 00:04:30.248 "zoned": false, 00:04:30.248 "supported_io_types": { 00:04:30.248 "read": true, 00:04:30.248 "write": true, 00:04:30.248 "unmap": true, 00:04:30.248 "flush": true, 00:04:30.248 "reset": true, 00:04:30.248 "nvme_admin": false, 00:04:30.248 "nvme_io": false, 00:04:30.248 "nvme_io_md": false, 00:04:30.248 "write_zeroes": true, 00:04:30.248 "zcopy": true, 00:04:30.248 "get_zone_info": false, 00:04:30.248 "zone_management": false, 00:04:30.248 "zone_append": false, 00:04:30.248 "compare": false, 00:04:30.248 "compare_and_write": false, 00:04:30.248 "abort": true, 00:04:30.248 "seek_hole": false, 00:04:30.248 "seek_data": false, 00:04:30.248 "copy": true, 00:04:30.248 "nvme_iov_md": false 00:04:30.248 }, 00:04:30.248 "memory_domains": [ 00:04:30.248 { 00:04:30.248 "dma_device_id": "system", 00:04:30.248 "dma_device_type": 1 00:04:30.248 }, 00:04:30.248 { 00:04:30.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.248 "dma_device_type": 2 00:04:30.248 } 00:04:30.248 ], 00:04:30.248 "driver_specific": {} 00:04:30.248 }, 00:04:30.248 { 00:04:30.248 "name": "Passthru0", 00:04:30.248 "aliases": [ 00:04:30.248 "586652ee-92cf-5dbb-8951-d0d02425616e" 00:04:30.248 ], 
00:04:30.248 "product_name": "passthru", 00:04:30.248 "block_size": 512, 00:04:30.248 "num_blocks": 16384, 00:04:30.248 "uuid": "586652ee-92cf-5dbb-8951-d0d02425616e", 00:04:30.248 "assigned_rate_limits": { 00:04:30.248 "rw_ios_per_sec": 0, 00:04:30.248 "rw_mbytes_per_sec": 0, 00:04:30.248 "r_mbytes_per_sec": 0, 00:04:30.248 "w_mbytes_per_sec": 0 00:04:30.248 }, 00:04:30.248 "claimed": false, 00:04:30.248 "zoned": false, 00:04:30.248 "supported_io_types": { 00:04:30.248 "read": true, 00:04:30.248 "write": true, 00:04:30.248 "unmap": true, 00:04:30.248 "flush": true, 00:04:30.248 "reset": true, 00:04:30.248 "nvme_admin": false, 00:04:30.248 "nvme_io": false, 00:04:30.248 "nvme_io_md": false, 00:04:30.248 "write_zeroes": true, 00:04:30.248 "zcopy": true, 00:04:30.248 "get_zone_info": false, 00:04:30.248 "zone_management": false, 00:04:30.248 "zone_append": false, 00:04:30.248 "compare": false, 00:04:30.248 "compare_and_write": false, 00:04:30.248 "abort": true, 00:04:30.248 "seek_hole": false, 00:04:30.248 "seek_data": false, 00:04:30.248 "copy": true, 00:04:30.248 "nvme_iov_md": false 00:04:30.248 }, 00:04:30.248 "memory_domains": [ 00:04:30.248 { 00:04:30.248 "dma_device_id": "system", 00:04:30.248 "dma_device_type": 1 00:04:30.248 }, 00:04:30.248 { 00:04:30.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.248 "dma_device_type": 2 00:04:30.248 } 00:04:30.248 ], 00:04:30.248 "driver_specific": { 00:04:30.248 "passthru": { 00:04:30.248 "name": "Passthru0", 00:04:30.248 "base_bdev_name": "Malloc2" 00:04:30.248 } 00:04:30.248 } 00:04:30.248 } 00:04:30.248 ]' 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.248 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.248 05:56:41 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.507 00:04:30.507 real 0m0.259s 00:04:30.507 user 0m0.149s 00:04:30.507 sys 0m0.023s 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.507 05:56:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.507 ************************************ 00:04:30.507 END TEST rpc_daemon_integrity 00:04:30.507 ************************************ 00:04:30.507 05:56:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:30.507 05:56:41 rpc -- rpc/rpc.sh@84 -- # killprocess 3155 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@950 -- # '[' -z 3155 ']' 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@954 -- # kill -0 3155 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@955 -- # uname 00:04:30.507 
05:56:41 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3155 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3155' 00:04:30.507 killing process with pid 3155 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@969 -- # kill 3155 00:04:30.507 05:56:41 rpc -- common/autotest_common.sh@974 -- # wait 3155 00:04:33.042 00:04:33.042 real 0m4.952s 00:04:33.042 user 0m5.512s 00:04:33.042 sys 0m0.781s 00:04:33.042 05:56:44 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.042 05:56:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.042 ************************************ 00:04:33.042 END TEST rpc 00:04:33.042 ************************************ 00:04:33.042 05:56:44 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.042 05:56:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.042 05:56:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.042 05:56:44 -- common/autotest_common.sh@10 -- # set +x 00:04:33.042 ************************************ 00:04:33.042 START TEST skip_rpc 00:04:33.042 ************************************ 00:04:33.042 05:56:44 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.042 * Looking for test storage... 
00:04:33.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.042 05:56:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.042 05:56:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:33.042 05:56:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:33.042 05:56:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.042 05:56:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.042 05:56:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.042 ************************************ 00:04:33.042 START TEST skip_rpc 00:04:33.042 ************************************ 00:04:33.042 05:56:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:33.042 05:56:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3912 00:04:33.042 05:56:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:33.042 05:56:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.042 05:56:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:33.300 [2024-07-26 05:56:44.420776] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:33.300 [2024-07-26 05:56:44.420926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912 ] 00:04:33.300 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.300 [2024-07-26 05:56:44.545083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.559 [2024-07-26 05:56:44.803086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 
0 )) 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3912 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3912 ']' 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3912 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3912 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3912' 00:04:38.840 killing process with pid 3912 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3912 00:04:38.840 05:56:49 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3912 00:04:40.746 00:04:40.746 real 0m7.514s 00:04:40.746 user 0m7.034s 00:04:40.746 sys 0m0.465s 00:04:40.746 05:56:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.746 05:56:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.746 ************************************ 00:04:40.746 END TEST skip_rpc 00:04:40.746 ************************************ 00:04:40.746 05:56:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:40.746 05:56:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.746 05:56:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.746 05:56:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.746 
************************************ 00:04:40.746 START TEST skip_rpc_with_json 00:04:40.746 ************************************ 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4871 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4871 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 4871 ']' 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.746 05:56:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.746 [2024-07-26 05:56:51.983750] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:40.746 [2024-07-26 05:56:51.983919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4871 ] 00:04:40.746 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.006 [2024-07-26 05:56:52.109386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.265 [2024-07-26 05:56:52.359657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.245 [2024-07-26 05:56:53.219708] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.245 request: 00:04:42.245 { 00:04:42.245 "trtype": "tcp", 00:04:42.245 "method": "nvmf_get_transports", 00:04:42.245 "req_id": 1 00:04:42.245 } 00:04:42.245 Got JSON-RPC error response 00:04:42.245 response: 00:04:42.245 { 00:04:42.245 "code": -19, 00:04:42.245 "message": "No such device" 00:04:42.245 } 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.245 [2024-07-26 05:56:53.227844] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.245 05:56:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.245 { 00:04:42.245 "subsystems": [ 00:04:42.245 { 00:04:42.245 "subsystem": "keyring", 00:04:42.245 "config": [] 00:04:42.245 }, 00:04:42.245 { 00:04:42.246 "subsystem": "iobuf", 00:04:42.246 "config": [ 00:04:42.246 { 00:04:42.246 "method": "iobuf_set_options", 00:04:42.246 "params": { 00:04:42.246 "small_pool_count": 8192, 00:04:42.246 "large_pool_count": 1024, 00:04:42.246 "small_bufsize": 8192, 00:04:42.246 "large_bufsize": 135168 00:04:42.246 } 00:04:42.246 } 00:04:42.246 ] 00:04:42.246 }, 00:04:42.246 { 00:04:42.246 "subsystem": "sock", 00:04:42.246 "config": [ 00:04:42.246 { 00:04:42.246 "method": "sock_set_default_impl", 00:04:42.246 "params": { 00:04:42.246 "impl_name": "posix" 00:04:42.246 } 00:04:42.246 }, 00:04:42.246 { 00:04:42.246 "method": "sock_impl_set_options", 00:04:42.246 "params": { 00:04:42.246 "impl_name": "ssl", 00:04:42.246 "recv_buf_size": 4096, 00:04:42.246 "send_buf_size": 4096, 00:04:42.246 "enable_recv_pipe": true, 00:04:42.246 "enable_quickack": false, 00:04:42.246 "enable_placement_id": 0, 00:04:42.246 "enable_zerocopy_send_server": true, 00:04:42.246 "enable_zerocopy_send_client": false, 00:04:42.246 "zerocopy_threshold": 0, 00:04:42.246 "tls_version": 0, 00:04:42.246 "enable_ktls": false 00:04:42.246 } 00:04:42.246 }, 00:04:42.246 
{ 00:04:42.246 "method": "sock_impl_set_options", 00:04:42.246 "params": { 00:04:42.246 "impl_name": "posix", 00:04:42.246 "recv_buf_size": 2097152, 00:04:42.246 "send_buf_size": 2097152, 00:04:42.246 "enable_recv_pipe": true, 00:04:42.246 "enable_quickack": false, 00:04:42.246 "enable_placement_id": 0, 00:04:42.246 "enable_zerocopy_send_server": true, 00:04:42.246 "enable_zerocopy_send_client": false, 00:04:42.246 "zerocopy_threshold": 0, 00:04:42.246 "tls_version": 0, 00:04:42.246 "enable_ktls": false 00:04:42.246 } 00:04:42.246 } 00:04:42.246 ] 00:04:42.246 }, 00:04:42.246 { 00:04:42.246 "subsystem": "vmd", 00:04:42.246 "config": [] 00:04:42.246 }, 00:04:42.246 { 00:04:42.246 "subsystem": "accel", 00:04:42.246 "config": [ 00:04:42.246 { 00:04:42.246 "method": "accel_set_options", 00:04:42.246 "params": { 00:04:42.246 "small_cache_size": 128, 00:04:42.246 "large_cache_size": 16, 00:04:42.246 "task_count": 2048, 00:04:42.246 "sequence_count": 2048, 00:04:42.246 "buf_count": 2048 00:04:42.246 } 00:04:42.246 } 00:04:42.246 ] 00:04:42.246 }, 00:04:42.246 { 00:04:42.246 "subsystem": "bdev", 00:04:42.246 "config": [ 00:04:42.246 { 00:04:42.247 "method": "bdev_set_options", 00:04:42.247 "params": { 00:04:42.247 "bdev_io_pool_size": 65535, 00:04:42.247 "bdev_io_cache_size": 256, 00:04:42.247 "bdev_auto_examine": true, 00:04:42.247 "iobuf_small_cache_size": 128, 00:04:42.247 "iobuf_large_cache_size": 16 00:04:42.247 } 00:04:42.247 }, 00:04:42.247 { 00:04:42.247 "method": "bdev_raid_set_options", 00:04:42.247 "params": { 00:04:42.247 "process_window_size_kb": 1024, 00:04:42.247 "process_max_bandwidth_mb_sec": 0 00:04:42.247 } 00:04:42.247 }, 00:04:42.247 { 00:04:42.247 "method": "bdev_iscsi_set_options", 00:04:42.247 "params": { 00:04:42.247 "timeout_sec": 30 00:04:42.247 } 00:04:42.247 }, 00:04:42.247 { 00:04:42.247 "method": "bdev_nvme_set_options", 00:04:42.247 "params": { 00:04:42.247 "action_on_timeout": "none", 00:04:42.247 "timeout_us": 0, 00:04:42.247 
"timeout_admin_us": 0, 00:04:42.247 "keep_alive_timeout_ms": 10000, 00:04:42.247 "arbitration_burst": 0, 00:04:42.247 "low_priority_weight": 0, 00:04:42.247 "medium_priority_weight": 0, 00:04:42.247 "high_priority_weight": 0, 00:04:42.247 "nvme_adminq_poll_period_us": 10000, 00:04:42.247 "nvme_ioq_poll_period_us": 0, 00:04:42.247 "io_queue_requests": 0, 00:04:42.247 "delay_cmd_submit": true, 00:04:42.247 "transport_retry_count": 4, 00:04:42.247 "bdev_retry_count": 3, 00:04:42.247 "transport_ack_timeout": 0, 00:04:42.247 "ctrlr_loss_timeout_sec": 0, 00:04:42.247 "reconnect_delay_sec": 0, 00:04:42.247 "fast_io_fail_timeout_sec": 0, 00:04:42.248 "disable_auto_failback": false, 00:04:42.248 "generate_uuids": false, 00:04:42.248 "transport_tos": 0, 00:04:42.248 "nvme_error_stat": false, 00:04:42.248 "rdma_srq_size": 0, 00:04:42.248 "io_path_stat": false, 00:04:42.248 "allow_accel_sequence": false, 00:04:42.248 "rdma_max_cq_size": 0, 00:04:42.248 "rdma_cm_event_timeout_ms": 0, 00:04:42.248 "dhchap_digests": [ 00:04:42.248 "sha256", 00:04:42.248 "sha384", 00:04:42.248 "sha512" 00:04:42.248 ], 00:04:42.248 "dhchap_dhgroups": [ 00:04:42.248 "null", 00:04:42.248 "ffdhe2048", 00:04:42.248 "ffdhe3072", 00:04:42.248 "ffdhe4096", 00:04:42.248 "ffdhe6144", 00:04:42.248 "ffdhe8192" 00:04:42.248 ] 00:04:42.248 } 00:04:42.248 }, 00:04:42.248 { 00:04:42.248 "method": "bdev_nvme_set_hotplug", 00:04:42.248 "params": { 00:04:42.248 "period_us": 100000, 00:04:42.248 "enable": false 00:04:42.248 } 00:04:42.248 }, 00:04:42.248 { 00:04:42.248 "method": "bdev_wait_for_examine" 00:04:42.248 } 00:04:42.248 ] 00:04:42.248 }, 00:04:42.248 { 00:04:42.248 "subsystem": "scsi", 00:04:42.248 "config": null 00:04:42.248 }, 00:04:42.248 { 00:04:42.248 "subsystem": "scheduler", 00:04:42.248 "config": [ 00:04:42.248 { 00:04:42.248 "method": "framework_set_scheduler", 00:04:42.248 "params": { 00:04:42.248 "name": "static" 00:04:42.248 } 00:04:42.248 } 00:04:42.248 ] 00:04:42.249 }, 00:04:42.249 { 
00:04:42.249 "subsystem": "vhost_scsi", 00:04:42.249 "config": [] 00:04:42.249 }, 00:04:42.249 { 00:04:42.249 "subsystem": "vhost_blk", 00:04:42.249 "config": [] 00:04:42.249 }, 00:04:42.249 { 00:04:42.249 "subsystem": "ublk", 00:04:42.249 "config": [] 00:04:42.249 }, 00:04:42.249 { 00:04:42.249 "subsystem": "nbd", 00:04:42.249 "config": [] 00:04:42.249 }, 00:04:42.249 { 00:04:42.249 "subsystem": "nvmf", 00:04:42.249 "config": [ 00:04:42.249 { 00:04:42.249 "method": "nvmf_set_config", 00:04:42.249 "params": { 00:04:42.249 "discovery_filter": "match_any", 00:04:42.249 "admin_cmd_passthru": { 00:04:42.249 "identify_ctrlr": false 00:04:42.249 } 00:04:42.249 } 00:04:42.249 }, 00:04:42.249 { 00:04:42.249 "method": "nvmf_set_max_subsystems", 00:04:42.249 "params": { 00:04:42.249 "max_subsystems": 1024 00:04:42.249 } 00:04:42.249 }, 00:04:42.249 { 00:04:42.249 "method": "nvmf_set_crdt", 00:04:42.249 "params": { 00:04:42.249 "crdt1": 0, 00:04:42.249 "crdt2": 0, 00:04:42.249 "crdt3": 0 00:04:42.249 } 00:04:42.249 }, 00:04:42.249 { 00:04:42.250 "method": "nvmf_create_transport", 00:04:42.250 "params": { 00:04:42.250 "trtype": "TCP", 00:04:42.250 "max_queue_depth": 128, 00:04:42.250 "max_io_qpairs_per_ctrlr": 127, 00:04:42.250 "in_capsule_data_size": 4096, 00:04:42.250 "max_io_size": 131072, 00:04:42.250 "io_unit_size": 131072, 00:04:42.250 "max_aq_depth": 128, 00:04:42.250 "num_shared_buffers": 511, 00:04:42.250 "buf_cache_size": 4294967295, 00:04:42.250 "dif_insert_or_strip": false, 00:04:42.250 "zcopy": false, 00:04:42.250 "c2h_success": true, 00:04:42.250 "sock_priority": 0, 00:04:42.250 "abort_timeout_sec": 1, 00:04:42.250 "ack_timeout": 0, 00:04:42.250 "data_wr_pool_size": 0 00:04:42.250 } 00:04:42.250 } 00:04:42.250 ] 00:04:42.250 }, 00:04:42.250 { 00:04:42.250 "subsystem": "iscsi", 00:04:42.250 "config": [ 00:04:42.250 { 00:04:42.250 "method": "iscsi_set_options", 00:04:42.250 "params": { 00:04:42.250 "node_base": "iqn.2016-06.io.spdk", 00:04:42.250 "max_sessions": 
128, 00:04:42.250 "max_connections_per_session": 2, 00:04:42.250 "max_queue_depth": 64, 00:04:42.250 "default_time2wait": 2, 00:04:42.250 "default_time2retain": 20, 00:04:42.250 "first_burst_length": 8192, 00:04:42.250 "immediate_data": true, 00:04:42.250 "allow_duplicated_isid": false, 00:04:42.250 "error_recovery_level": 0, 00:04:42.250 "nop_timeout": 60, 00:04:42.250 "nop_in_interval": 30, 00:04:42.250 "disable_chap": false, 00:04:42.250 "require_chap": false, 00:04:42.250 "mutual_chap": false, 00:04:42.250 "chap_group": 0, 00:04:42.250 "max_large_datain_per_connection": 64, 00:04:42.250 "max_r2t_per_connection": 4, 00:04:42.250 "pdu_pool_size": 36864, 00:04:42.251 "immediate_data_pool_size": 16384, 00:04:42.251 "data_out_pool_size": 2048 00:04:42.251 } 00:04:42.251 } 00:04:42.251 ] 00:04:42.251 } 00:04:42.251 ] 00:04:42.251 } 00:04:42.251 05:56:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.251 05:56:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4871 00:04:42.251 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4871 ']' 00:04:42.251 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4871 00:04:42.251 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:42.251 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.252 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4871 00:04:42.252 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.252 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.252 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4871' 00:04:42.252 killing process with pid 4871 00:04:42.252 05:56:53 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4871 00:04:42.252 05:56:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4871 00:04:44.798 05:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=5291 00:04:44.798 05:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.798 05:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 5291 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 5291 ']' 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 5291 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 5291 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 5291' 00:04:50.069 killing process with pid 5291 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 5291 00:04:50.069 05:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 5291 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.605 
05:57:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.605 00:04:52.605 real 0m11.526s 00:04:52.605 user 0m11.004s 00:04:52.605 sys 0m1.028s 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.605 ************************************ 00:04:52.605 END TEST skip_rpc_with_json 00:04:52.605 ************************************ 00:04:52.605 05:57:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:52.605 05:57:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.605 05:57:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.605 05:57:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.605 ************************************ 00:04:52.605 START TEST skip_rpc_with_delay 00:04:52.605 ************************************ 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.605 05:57:03 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.605 [2024-07-26 05:57:03.558718] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:52.605 [2024-07-26 05:57:03.558917] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:52.605 00:04:52.605 real 0m0.143s 00:04:52.605 user 0m0.079s 00:04:52.605 sys 0m0.064s 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.605 05:57:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:52.605 ************************************ 00:04:52.605 END TEST skip_rpc_with_delay 00:04:52.605 ************************************ 00:04:52.605 05:57:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:52.605 05:57:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:52.605 05:57:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:52.605 05:57:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.605 05:57:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.605 05:57:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.605 ************************************ 00:04:52.605 START TEST exit_on_failed_rpc_init 00:04:52.605 ************************************ 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=6385 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 6385 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 6385 ']' 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.605 05:57:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.605 [2024-07-26 05:57:03.748473] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:52.605 [2024-07-26 05:57:03.748652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid6385 ] 00:04:52.605 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.605 [2024-07-26 05:57:03.875645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.864 [2024-07-26 05:57:04.126333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.802 05:57:05 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.802 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.802 [2024-07-26 05:57:05.110288] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:53.802 [2024-07-26 05:57:05.110471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid6646 ] 00:04:54.062 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.062 [2024-07-26 05:57:05.242042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.321 [2024-07-26 05:57:05.495081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.321 [2024-07-26 05:57:05.495273] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:54.321 [2024-07-26 05:57:05.495304] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.321 [2024-07-26 05:57:05.495327] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 6385 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 6385 ']' 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 6385 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 6385 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 6385' 00:04:54.892 
killing process with pid 6385 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 6385 00:04:54.892 05:57:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 6385 00:04:57.428 00:04:57.428 real 0m4.819s 00:04:57.428 user 0m5.505s 00:04:57.428 sys 0m0.749s 00:04:57.428 05:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.428 05:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.428 ************************************ 00:04:57.428 END TEST exit_on_failed_rpc_init 00:04:57.428 ************************************ 00:04:57.428 05:57:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.428 00:04:57.428 real 0m24.250s 00:04:57.428 user 0m23.729s 00:04:57.428 sys 0m2.463s 00:04:57.428 05:57:08 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.428 05:57:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.428 ************************************ 00:04:57.428 END TEST skip_rpc 00:04:57.428 ************************************ 00:04:57.428 05:57:08 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.428 05:57:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.428 05:57:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.428 05:57:08 -- common/autotest_common.sh@10 -- # set +x 00:04:57.428 ************************************ 00:04:57.428 START TEST rpc_client 00:04:57.428 ************************************ 00:04:57.428 05:57:08 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.428 * Looking for test storage... 
00:04:57.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:57.428 05:57:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:57.428 OK 00:04:57.428 05:57:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.428 00:04:57.428 real 0m0.094s 00:04:57.428 user 0m0.038s 00:04:57.429 sys 0m0.060s 00:04:57.429 05:57:08 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.429 05:57:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:57.429 ************************************ 00:04:57.429 END TEST rpc_client 00:04:57.429 ************************************ 00:04:57.429 05:57:08 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.429 05:57:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.429 05:57:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.429 05:57:08 -- common/autotest_common.sh@10 -- # set +x 00:04:57.429 ************************************ 00:04:57.429 START TEST json_config 00:04:57.429 ************************************ 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.429 05:57:08 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.429 05:57:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.429 05:57:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.429 05:57:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.429 05:57:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:57.429 05:57:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.429 05:57:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.429 05:57:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:57.429 05:57:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@47 -- # : 0 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.429 05:57:08 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:57.429 05:57:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:57.429 05:57:08 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:57.429 INFO: JSON configuration test init 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.429 05:57:08 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:57.429 05:57:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:57.429 05:57:08 json_config -- json_config/common.sh@10 -- # shift 00:04:57.429 05:57:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.429 05:57:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.429 05:57:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.429 05:57:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.429 05:57:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.429 05:57:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=7595 00:04:57.429 05:57:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:57.429 05:57:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:57.429 Waiting for target to run... 00:04:57.429 05:57:08 json_config -- json_config/common.sh@25 -- # waitforlisten 7595 /var/tmp/spdk_tgt.sock 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@831 -- # '[' -z 7595 ']' 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.429 05:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.687 [2024-07-26 05:57:08.835966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:57.687 [2024-07-26 05:57:08.836153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid7595 ] 00:04:57.687 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.251 [2024-07-26 05:57:09.376601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.508 [2024-07-26 05:57:09.613239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.508 05:57:09 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.508 05:57:09 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:58.508 05:57:09 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.508 00:04:58.508 05:57:09 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:58.508 05:57:09 json_config -- json_config/json_config.sh@97 -- # 
timing_enter create_accel_config 00:04:58.508 05:57:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.508 05:57:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.508 05:57:09 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:58.508 05:57:09 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:58.508 05:57:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.508 05:57:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.508 05:57:09 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:58.508 05:57:09 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:58.509 05:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.704 05:57:13 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.704 05:57:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:02.704 05:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock notify_get_types 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@51 -- # sort 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:02.704 05:57:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.704 05:57:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:02.704 05:57:13 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.704 05:57:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.704 05:57:13 
json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:02.704 05:57:13 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.704 05:57:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.961 MallocForNvmf0 00:05:02.961 05:57:14 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.961 05:57:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.218 MallocForNvmf1 00:05:03.218 05:57:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.218 05:57:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.473 [2024-07-26 05:57:14.608090] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.473 05:57:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.473 05:57:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.764 05:57:14 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.764 05:57:14 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.022 05:57:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.022 05:57:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.279 05:57:15 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.279 05:57:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.279 [2024-07-26 05:57:15.607566] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.536 05:57:15 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:04.536 05:57:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.536 05:57:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.536 05:57:15 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:04.536 05:57:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.536 05:57:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.536 05:57:15 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:04.536 05:57:15 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.536 05:57:15 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.793 MallocBdevForConfigChangeCheck 00:05:04.793 05:57:15 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:04.793 05:57:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.793 05:57:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.793 05:57:15 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:04.793 05:57:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.051 05:57:16 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:05.051 INFO: shutting down applications... 00:05:05.051 05:57:16 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:05.051 05:57:16 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:05.051 05:57:16 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:05.051 05:57:16 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.956 Calling clear_iscsi_subsystem 00:05:06.956 Calling clear_nvmf_subsystem 00:05:06.956 Calling clear_nbd_subsystem 00:05:06.956 Calling clear_ublk_subsystem 00:05:06.956 Calling clear_vhost_blk_subsystem 00:05:06.956 Calling clear_vhost_scsi_subsystem 00:05:06.956 Calling clear_bdev_subsystem 00:05:06.956 05:57:17 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:06.956 05:57:17 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:06.956 05:57:17 json_config -- json_config/json_config.sh@348 -- 
# '[' 100 -gt 0 ']' 00:05:06.956 05:57:17 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.956 05:57:17 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.956 05:57:17 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:07.215 05:57:18 json_config -- json_config/json_config.sh@349 -- # break 00:05:07.215 05:57:18 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:07.215 05:57:18 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:07.215 05:57:18 json_config -- json_config/common.sh@31 -- # local app=target 00:05:07.215 05:57:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.215 05:57:18 json_config -- json_config/common.sh@35 -- # [[ -n 7595 ]] 00:05:07.215 05:57:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 7595 00:05:07.215 05:57:18 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.215 05:57:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.215 05:57:18 json_config -- json_config/common.sh@41 -- # kill -0 7595 00:05:07.215 05:57:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.784 05:57:18 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.784 05:57:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.784 05:57:18 json_config -- json_config/common.sh@41 -- # kill -0 7595 00:05:07.784 05:57:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.044 05:57:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.044 05:57:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.044 05:57:19 json_config -- 
json_config/common.sh@41 -- # kill -0 7595 00:05:08.044 05:57:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.609 05:57:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.609 05:57:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.609 05:57:19 json_config -- json_config/common.sh@41 -- # kill -0 7595 00:05:08.609 05:57:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.609 05:57:19 json_config -- json_config/common.sh@43 -- # break 00:05:08.609 05:57:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.609 05:57:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.609 SPDK target shutdown done 00:05:08.610 05:57:19 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:08.610 INFO: relaunching applications... 00:05:08.610 05:57:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.610 05:57:19 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.610 05:57:19 json_config -- json_config/common.sh@10 -- # shift 00:05:08.610 05:57:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.610 05:57:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.610 05:57:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.610 05:57:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.610 05:57:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.610 05:57:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=9129 00:05:08.610 05:57:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 
00:05:08.610 05:57:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.610 Waiting for target to run... 00:05:08.610 05:57:19 json_config -- json_config/common.sh@25 -- # waitforlisten 9129 /var/tmp/spdk_tgt.sock 00:05:08.610 05:57:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 9129 ']' 00:05:08.610 05:57:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.610 05:57:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.610 05:57:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.610 05:57:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.610 05:57:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.610 [2024-07-26 05:57:19.929966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:08.610 [2024-07-26 05:57:19.930104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9129 ] 00:05:08.869 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.436 [2024-07-26 05:57:20.510715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.436 [2024-07-26 05:57:20.748412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.624 [2024-07-26 05:57:24.464456] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.624 [2024-07-26 05:57:24.496939] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.882 05:57:25 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.882 05:57:25 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:13.882 05:57:25 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.882 00:05:13.882 05:57:25 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:13.882 05:57:25 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.882 INFO: Checking if target configuration is the same... 
00:05:13.882 05:57:25 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.882 05:57:25 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:13.882 05:57:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.882 + '[' 2 -ne 2 ']' 00:05:13.882 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.882 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.882 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.882 +++ basename /dev/fd/62 00:05:13.882 ++ mktemp /tmp/62.XXX 00:05:13.882 + tmp_file_1=/tmp/62.uGS 00:05:13.882 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.882 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.882 + tmp_file_2=/tmp/spdk_tgt_config.json.eWQ 00:05:13.882 + ret=0 00:05:13.882 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.448 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.448 + diff -u /tmp/62.uGS /tmp/spdk_tgt_config.json.eWQ 00:05:14.448 + echo 'INFO: JSON config files are the same' 00:05:14.448 INFO: JSON config files are the same 00:05:14.448 + rm /tmp/62.uGS /tmp/spdk_tgt_config.json.eWQ 00:05:14.448 + exit 0 00:05:14.448 05:57:25 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:14.448 05:57:25 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:14.448 INFO: changing configuration and checking if this can be detected... 
00:05:14.448 05:57:25 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.448 05:57:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.706 05:57:25 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.706 05:57:25 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:14.706 05:57:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.706 + '[' 2 -ne 2 ']' 00:05:14.706 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.706 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:14.706 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.706 +++ basename /dev/fd/62 00:05:14.706 ++ mktemp /tmp/62.XXX 00:05:14.706 + tmp_file_1=/tmp/62.MYd 00:05:14.706 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.706 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.706 + tmp_file_2=/tmp/spdk_tgt_config.json.09T 00:05:14.706 + ret=0 00:05:14.706 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.964 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.964 + diff -u /tmp/62.MYd /tmp/spdk_tgt_config.json.09T 00:05:14.964 + ret=1 00:05:14.964 + echo '=== Start of file: /tmp/62.MYd ===' 00:05:14.964 + cat /tmp/62.MYd 00:05:14.964 + echo '=== End of file: /tmp/62.MYd ===' 00:05:14.964 + echo '' 00:05:14.964 + echo '=== Start of file: /tmp/spdk_tgt_config.json.09T ===' 00:05:14.964 + cat /tmp/spdk_tgt_config.json.09T 00:05:14.964 + echo '=== End of file: /tmp/spdk_tgt_config.json.09T ===' 00:05:14.964 + echo '' 00:05:14.964 + rm /tmp/62.MYd /tmp/spdk_tgt_config.json.09T 00:05:14.964 + exit 1 00:05:14.964 05:57:26 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:14.964 INFO: configuration change detected. 
00:05:14.964 05:57:26 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:14.964 05:57:26 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:14.964 05:57:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.964 05:57:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.964 05:57:26 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:14.964 05:57:26 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:14.964 05:57:26 json_config -- json_config/json_config.sh@321 -- # [[ -n 9129 ]] 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.965 05:57:26 json_config -- json_config/json_config.sh@327 -- # killprocess 9129 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@950 -- # '[' -z 9129 ']' 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@954 -- # kill -0 9129 
00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@955 -- # uname 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 9129 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 9129' 00:05:14.965 killing process with pid 9129 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@969 -- # kill 9129 00:05:14.965 05:57:26 json_config -- common/autotest_common.sh@974 -- # wait 9129 00:05:17.494 05:57:28 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.495 05:57:28 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:17.495 05:57:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.495 05:57:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.495 05:57:28 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:17.495 05:57:28 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:17.495 INFO: Success 00:05:17.495 00:05:17.495 real 0m20.045s 00:05:17.495 user 0m21.580s 00:05:17.495 sys 0m2.598s 00:05:17.495 05:57:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.495 05:57:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.495 ************************************ 00:05:17.495 END TEST json_config 00:05:17.495 ************************************ 00:05:17.495 05:57:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.495 05:57:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.495 05:57:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.495 05:57:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.495 ************************************ 00:05:17.495 START TEST json_config_extra_key 00:05:17.495 ************************************ 00:05:17.495 05:57:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.495 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:17.495 05:57:28 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.495 05:57:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.495 05:57:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.495 05:57:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.495 05:57:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.495 05:57:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.495 05:57:28 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.495 05:57:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:17.495 05:57:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.495 05:57:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.754 05:57:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:17.754 05:57:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:17.754 05:57:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:17.754 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:17.754 05:57:28 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:17.754 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:17.754 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:17.755 INFO: launching applications... 
00:05:17.755 05:57:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=10317 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.755 Waiting for target to run... 
00:05:17.755 05:57:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 10317 /var/tmp/spdk_tgt.sock 00:05:17.755 05:57:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 10317 ']' 00:05:17.755 05:57:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.755 05:57:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.755 05:57:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.755 05:57:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.755 05:57:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.755 [2024-07-26 05:57:28.922951] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:17.755 [2024-07-26 05:57:28.923120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10317 ] 00:05:17.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.323 [2024-07-26 05:57:29.512085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.581 [2024-07-26 05:57:29.751051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.147 05:57:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.147 05:57:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:19.147 05:57:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.147 00:05:19.148 05:57:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:19.148 INFO: shutting down applications... 
00:05:19.148 05:57:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 10317 ]] 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 10317 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 10317 00:05:19.148 05:57:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.714 05:57:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.714 05:57:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.714 05:57:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 10317 00:05:19.714 05:57:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.280 05:57:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.280 05:57:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.280 05:57:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 10317 00:05:20.280 05:57:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.847 05:57:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.847 05:57:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.847 05:57:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 10317 00:05:20.847 05:57:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.413 05:57:32 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:21.413 05:57:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.413 05:57:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 10317 00:05:21.413 05:57:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.674 05:57:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.674 05:57:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.674 05:57:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 10317 00:05:21.674 05:57:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.272 05:57:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.272 05:57:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.272 05:57:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 10317 00:05:22.272 05:57:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.272 05:57:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:22.272 05:57:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.272 05:57:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.272 SPDK target shutdown done 00:05:22.272 05:57:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:22.272 Success 00:05:22.272 00:05:22.272 real 0m4.703s 00:05:22.272 user 0m4.268s 00:05:22.272 sys 0m0.810s 00:05:22.272 05:57:33 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.272 05:57:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 ************************************ 00:05:22.272 END TEST json_config_extra_key 00:05:22.272 ************************************ 00:05:22.272 05:57:33 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.272 05:57:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.272 05:57:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.272 05:57:33 -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 ************************************ 00:05:22.272 START TEST alias_rpc 00:05:22.272 ************************************ 00:05:22.272 05:57:33 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.272 * Looking for test storage... 00:05:22.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:22.272 05:57:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.272 05:57:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=10913 00:05:22.272 05:57:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.272 05:57:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 10913 00:05:22.272 05:57:33 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 10913 ']' 00:05:22.272 05:57:33 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.272 05:57:33 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.272 05:57:33 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.272 05:57:33 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.272 05:57:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.531 [2024-07-26 05:57:33.674826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:22.531 [2024-07-26 05:57:33.674982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10913 ] 00:05:22.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.531 [2024-07-26 05:57:33.793956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.789 [2024-07-26 05:57:34.049807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.724 05:57:34 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.724 05:57:34 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:23.724 05:57:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:23.982 05:57:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 10913 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 10913 ']' 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 10913 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 10913 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 10913' 00:05:23.982 killing 
process with pid 10913 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@969 -- # kill 10913 00:05:23.982 05:57:35 alias_rpc -- common/autotest_common.sh@974 -- # wait 10913 00:05:26.515 00:05:26.515 real 0m4.285s 00:05:26.515 user 0m4.443s 00:05:26.515 sys 0m0.580s 00:05:26.515 05:57:37 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.515 05:57:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.515 ************************************ 00:05:26.515 END TEST alias_rpc 00:05:26.515 ************************************ 00:05:26.515 05:57:37 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:26.515 05:57:37 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.515 05:57:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.515 05:57:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.515 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 ************************************ 00:05:26.865 START TEST spdkcli_tcp 00:05:26.865 ************************************ 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.865 * Looking for test storage... 
00:05:26.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=11493 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.865 05:57:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 11493 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 11493 ']' 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.865 05:57:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 [2024-07-26 05:57:38.029677] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:26.865 [2024-07-26 05:57:38.029845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11493 ] 00:05:26.865 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.865 [2024-07-26 05:57:38.168211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.123 [2024-07-26 05:57:38.430344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.123 [2024-07-26 05:57:38.430351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.057 05:57:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.057 05:57:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:28.058 05:57:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=11640 00:05:28.058 05:57:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:28.058 05:57:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:28.316 [ 00:05:28.317 "bdev_malloc_delete", 00:05:28.317 "bdev_malloc_create", 00:05:28.317 "bdev_null_resize", 00:05:28.317 "bdev_null_delete", 00:05:28.317 "bdev_null_create", 00:05:28.317 "bdev_nvme_cuse_unregister", 00:05:28.317 "bdev_nvme_cuse_register", 00:05:28.317 "bdev_opal_new_user", 00:05:28.317 "bdev_opal_set_lock_state", 00:05:28.317 "bdev_opal_delete", 00:05:28.317 "bdev_opal_get_info", 00:05:28.317 "bdev_opal_create", 00:05:28.317 "bdev_nvme_opal_revert", 00:05:28.317 
"bdev_nvme_opal_init", 00:05:28.317 "bdev_nvme_send_cmd", 00:05:28.317 "bdev_nvme_get_path_iostat", 00:05:28.317 "bdev_nvme_get_mdns_discovery_info", 00:05:28.317 "bdev_nvme_stop_mdns_discovery", 00:05:28.317 "bdev_nvme_start_mdns_discovery", 00:05:28.317 "bdev_nvme_set_multipath_policy", 00:05:28.317 "bdev_nvme_set_preferred_path", 00:05:28.317 "bdev_nvme_get_io_paths", 00:05:28.317 "bdev_nvme_remove_error_injection", 00:05:28.317 "bdev_nvme_add_error_injection", 00:05:28.317 "bdev_nvme_get_discovery_info", 00:05:28.317 "bdev_nvme_stop_discovery", 00:05:28.317 "bdev_nvme_start_discovery", 00:05:28.317 "bdev_nvme_get_controller_health_info", 00:05:28.317 "bdev_nvme_disable_controller", 00:05:28.317 "bdev_nvme_enable_controller", 00:05:28.317 "bdev_nvme_reset_controller", 00:05:28.317 "bdev_nvme_get_transport_statistics", 00:05:28.317 "bdev_nvme_apply_firmware", 00:05:28.317 "bdev_nvme_detach_controller", 00:05:28.317 "bdev_nvme_get_controllers", 00:05:28.317 "bdev_nvme_attach_controller", 00:05:28.317 "bdev_nvme_set_hotplug", 00:05:28.317 "bdev_nvme_set_options", 00:05:28.317 "bdev_passthru_delete", 00:05:28.317 "bdev_passthru_create", 00:05:28.317 "bdev_lvol_set_parent_bdev", 00:05:28.317 "bdev_lvol_set_parent", 00:05:28.317 "bdev_lvol_check_shallow_copy", 00:05:28.317 "bdev_lvol_start_shallow_copy", 00:05:28.317 "bdev_lvol_grow_lvstore", 00:05:28.317 "bdev_lvol_get_lvols", 00:05:28.317 "bdev_lvol_get_lvstores", 00:05:28.317 "bdev_lvol_delete", 00:05:28.317 "bdev_lvol_set_read_only", 00:05:28.317 "bdev_lvol_resize", 00:05:28.317 "bdev_lvol_decouple_parent", 00:05:28.317 "bdev_lvol_inflate", 00:05:28.317 "bdev_lvol_rename", 00:05:28.317 "bdev_lvol_clone_bdev", 00:05:28.317 "bdev_lvol_clone", 00:05:28.317 "bdev_lvol_snapshot", 00:05:28.317 "bdev_lvol_create", 00:05:28.317 "bdev_lvol_delete_lvstore", 00:05:28.317 "bdev_lvol_rename_lvstore", 00:05:28.317 "bdev_lvol_create_lvstore", 00:05:28.317 "bdev_raid_set_options", 00:05:28.317 "bdev_raid_remove_base_bdev", 
00:05:28.317 "bdev_raid_add_base_bdev", 00:05:28.317 "bdev_raid_delete", 00:05:28.317 "bdev_raid_create", 00:05:28.317 "bdev_raid_get_bdevs", 00:05:28.317 "bdev_error_inject_error", 00:05:28.317 "bdev_error_delete", 00:05:28.317 "bdev_error_create", 00:05:28.317 "bdev_split_delete", 00:05:28.317 "bdev_split_create", 00:05:28.317 "bdev_delay_delete", 00:05:28.317 "bdev_delay_create", 00:05:28.317 "bdev_delay_update_latency", 00:05:28.317 "bdev_zone_block_delete", 00:05:28.317 "bdev_zone_block_create", 00:05:28.317 "blobfs_create", 00:05:28.317 "blobfs_detect", 00:05:28.317 "blobfs_set_cache_size", 00:05:28.317 "bdev_aio_delete", 00:05:28.317 "bdev_aio_rescan", 00:05:28.317 "bdev_aio_create", 00:05:28.317 "bdev_ftl_set_property", 00:05:28.317 "bdev_ftl_get_properties", 00:05:28.317 "bdev_ftl_get_stats", 00:05:28.317 "bdev_ftl_unmap", 00:05:28.317 "bdev_ftl_unload", 00:05:28.317 "bdev_ftl_delete", 00:05:28.317 "bdev_ftl_load", 00:05:28.317 "bdev_ftl_create", 00:05:28.317 "bdev_virtio_attach_controller", 00:05:28.317 "bdev_virtio_scsi_get_devices", 00:05:28.317 "bdev_virtio_detach_controller", 00:05:28.317 "bdev_virtio_blk_set_hotplug", 00:05:28.317 "bdev_iscsi_delete", 00:05:28.317 "bdev_iscsi_create", 00:05:28.317 "bdev_iscsi_set_options", 00:05:28.317 "accel_error_inject_error", 00:05:28.317 "ioat_scan_accel_module", 00:05:28.317 "dsa_scan_accel_module", 00:05:28.317 "iaa_scan_accel_module", 00:05:28.317 "keyring_file_remove_key", 00:05:28.317 "keyring_file_add_key", 00:05:28.317 "keyring_linux_set_options", 00:05:28.317 "iscsi_get_histogram", 00:05:28.317 "iscsi_enable_histogram", 00:05:28.317 "iscsi_set_options", 00:05:28.317 "iscsi_get_auth_groups", 00:05:28.317 "iscsi_auth_group_remove_secret", 00:05:28.317 "iscsi_auth_group_add_secret", 00:05:28.317 "iscsi_delete_auth_group", 00:05:28.317 "iscsi_create_auth_group", 00:05:28.317 "iscsi_set_discovery_auth", 00:05:28.317 "iscsi_get_options", 00:05:28.317 "iscsi_target_node_request_logout", 00:05:28.317 
"iscsi_target_node_set_redirect", 00:05:28.317 "iscsi_target_node_set_auth", 00:05:28.317 "iscsi_target_node_add_lun", 00:05:28.317 "iscsi_get_stats", 00:05:28.317 "iscsi_get_connections", 00:05:28.317 "iscsi_portal_group_set_auth", 00:05:28.317 "iscsi_start_portal_group", 00:05:28.317 "iscsi_delete_portal_group", 00:05:28.317 "iscsi_create_portal_group", 00:05:28.317 "iscsi_get_portal_groups", 00:05:28.317 "iscsi_delete_target_node", 00:05:28.317 "iscsi_target_node_remove_pg_ig_maps", 00:05:28.317 "iscsi_target_node_add_pg_ig_maps", 00:05:28.317 "iscsi_create_target_node", 00:05:28.317 "iscsi_get_target_nodes", 00:05:28.317 "iscsi_delete_initiator_group", 00:05:28.317 "iscsi_initiator_group_remove_initiators", 00:05:28.317 "iscsi_initiator_group_add_initiators", 00:05:28.317 "iscsi_create_initiator_group", 00:05:28.317 "iscsi_get_initiator_groups", 00:05:28.317 "nvmf_set_crdt", 00:05:28.317 "nvmf_set_config", 00:05:28.317 "nvmf_set_max_subsystems", 00:05:28.317 "nvmf_stop_mdns_prr", 00:05:28.317 "nvmf_publish_mdns_prr", 00:05:28.317 "nvmf_subsystem_get_listeners", 00:05:28.317 "nvmf_subsystem_get_qpairs", 00:05:28.317 "nvmf_subsystem_get_controllers", 00:05:28.317 "nvmf_get_stats", 00:05:28.317 "nvmf_get_transports", 00:05:28.317 "nvmf_create_transport", 00:05:28.317 "nvmf_get_targets", 00:05:28.317 "nvmf_delete_target", 00:05:28.317 "nvmf_create_target", 00:05:28.317 "nvmf_subsystem_allow_any_host", 00:05:28.317 "nvmf_subsystem_remove_host", 00:05:28.317 "nvmf_subsystem_add_host", 00:05:28.317 "nvmf_ns_remove_host", 00:05:28.317 "nvmf_ns_add_host", 00:05:28.317 "nvmf_subsystem_remove_ns", 00:05:28.317 "nvmf_subsystem_add_ns", 00:05:28.317 "nvmf_subsystem_listener_set_ana_state", 00:05:28.317 "nvmf_discovery_get_referrals", 00:05:28.317 "nvmf_discovery_remove_referral", 00:05:28.317 "nvmf_discovery_add_referral", 00:05:28.317 "nvmf_subsystem_remove_listener", 00:05:28.317 "nvmf_subsystem_add_listener", 00:05:28.317 "nvmf_delete_subsystem", 00:05:28.317 
"nvmf_create_subsystem", 00:05:28.317 "nvmf_get_subsystems", 00:05:28.317 "env_dpdk_get_mem_stats", 00:05:28.317 "nbd_get_disks", 00:05:28.317 "nbd_stop_disk", 00:05:28.317 "nbd_start_disk", 00:05:28.317 "ublk_recover_disk", 00:05:28.317 "ublk_get_disks", 00:05:28.317 "ublk_stop_disk", 00:05:28.317 "ublk_start_disk", 00:05:28.317 "ublk_destroy_target", 00:05:28.317 "ublk_create_target", 00:05:28.317 "virtio_blk_create_transport", 00:05:28.317 "virtio_blk_get_transports", 00:05:28.317 "vhost_controller_set_coalescing", 00:05:28.317 "vhost_get_controllers", 00:05:28.317 "vhost_delete_controller", 00:05:28.317 "vhost_create_blk_controller", 00:05:28.317 "vhost_scsi_controller_remove_target", 00:05:28.317 "vhost_scsi_controller_add_target", 00:05:28.317 "vhost_start_scsi_controller", 00:05:28.317 "vhost_create_scsi_controller", 00:05:28.317 "thread_set_cpumask", 00:05:28.317 "framework_get_governor", 00:05:28.317 "framework_get_scheduler", 00:05:28.317 "framework_set_scheduler", 00:05:28.317 "framework_get_reactors", 00:05:28.317 "thread_get_io_channels", 00:05:28.317 "thread_get_pollers", 00:05:28.317 "thread_get_stats", 00:05:28.317 "framework_monitor_context_switch", 00:05:28.317 "spdk_kill_instance", 00:05:28.317 "log_enable_timestamps", 00:05:28.317 "log_get_flags", 00:05:28.317 "log_clear_flag", 00:05:28.317 "log_set_flag", 00:05:28.317 "log_get_level", 00:05:28.317 "log_set_level", 00:05:28.317 "log_get_print_level", 00:05:28.317 "log_set_print_level", 00:05:28.317 "framework_enable_cpumask_locks", 00:05:28.317 "framework_disable_cpumask_locks", 00:05:28.317 "framework_wait_init", 00:05:28.317 "framework_start_init", 00:05:28.317 "scsi_get_devices", 00:05:28.317 "bdev_get_histogram", 00:05:28.317 "bdev_enable_histogram", 00:05:28.317 "bdev_set_qos_limit", 00:05:28.317 "bdev_set_qd_sampling_period", 00:05:28.317 "bdev_get_bdevs", 00:05:28.317 "bdev_reset_iostat", 00:05:28.317 "bdev_get_iostat", 00:05:28.317 "bdev_examine", 00:05:28.317 "bdev_wait_for_examine", 
00:05:28.317 "bdev_set_options", 00:05:28.317 "notify_get_notifications", 00:05:28.317 "notify_get_types", 00:05:28.317 "accel_get_stats", 00:05:28.317 "accel_set_options", 00:05:28.317 "accel_set_driver", 00:05:28.317 "accel_crypto_key_destroy", 00:05:28.317 "accel_crypto_keys_get", 00:05:28.318 "accel_crypto_key_create", 00:05:28.318 "accel_assign_opc", 00:05:28.318 "accel_get_module_info", 00:05:28.318 "accel_get_opc_assignments", 00:05:28.318 "vmd_rescan", 00:05:28.318 "vmd_remove_device", 00:05:28.318 "vmd_enable", 00:05:28.318 "sock_get_default_impl", 00:05:28.318 "sock_set_default_impl", 00:05:28.318 "sock_impl_set_options", 00:05:28.318 "sock_impl_get_options", 00:05:28.318 "iobuf_get_stats", 00:05:28.318 "iobuf_set_options", 00:05:28.318 "framework_get_pci_devices", 00:05:28.318 "framework_get_config", 00:05:28.318 "framework_get_subsystems", 00:05:28.318 "trace_get_info", 00:05:28.318 "trace_get_tpoint_group_mask", 00:05:28.318 "trace_disable_tpoint_group", 00:05:28.318 "trace_enable_tpoint_group", 00:05:28.318 "trace_clear_tpoint_mask", 00:05:28.318 "trace_set_tpoint_mask", 00:05:28.318 "keyring_get_keys", 00:05:28.318 "spdk_get_version", 00:05:28.318 "rpc_get_methods" 00:05:28.318 ] 00:05:28.318 05:57:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.318 05:57:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:28.318 05:57:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 11493 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 11493 ']' 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 11493 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.318 
05:57:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 11493 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 11493' 00:05:28.318 killing process with pid 11493 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 11493 00:05:28.318 05:57:39 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 11493 00:05:30.846 00:05:30.846 real 0m4.246s 00:05:30.846 user 0m7.522s 00:05:30.846 sys 0m0.692s 00:05:30.846 05:57:42 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.846 05:57:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.846 ************************************ 00:05:30.846 END TEST spdkcli_tcp 00:05:30.846 ************************************ 00:05:30.846 05:57:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.846 05:57:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.846 05:57:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.846 05:57:42 -- common/autotest_common.sh@10 -- # set +x 00:05:30.846 ************************************ 00:05:30.846 START TEST dpdk_mem_utility 00:05:30.846 ************************************ 00:05:30.846 05:57:42 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.104 * Looking for test storage... 
00:05:31.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.104 05:57:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.104 05:57:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=12092 00:05:31.104 05:57:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.104 05:57:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 12092 00:05:31.104 05:57:42 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 12092 ']' 00:05:31.104 05:57:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.104 05:57:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.104 05:57:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.104 05:57:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.104 05:57:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.104 [2024-07-26 05:57:42.299030] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:31.104 [2024-07-26 05:57:42.299180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12092 ] 00:05:31.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.104 [2024-07-26 05:57:42.427065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.363 [2024-07-26 05:57:42.686282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.296 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.296 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:32.296 05:57:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:32.296 05:57:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:32.296 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.296 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.296 { 00:05:32.296 "filename": "/tmp/spdk_mem_dump.txt" 00:05:32.296 } 00:05:32.296 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.296 05:57:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:32.555 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:32.555 1 heaps totaling size 820.000000 MiB 00:05:32.555 size: 820.000000 MiB heap id: 0 00:05:32.555 end heaps---------- 00:05:32.555 8 mempools totaling size 598.116089 MiB 00:05:32.555 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:32.555 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:32.555 size: 84.521057 MiB name: bdev_io_12092 00:05:32.555 size: 51.011292 MiB name: evtpool_12092 
00:05:32.555 size: 50.003479 MiB name: msgpool_12092 00:05:32.555 size: 21.763794 MiB name: PDU_Pool 00:05:32.555 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:32.555 size: 0.026123 MiB name: Session_Pool 00:05:32.555 end mempools------- 00:05:32.555 6 memzones totaling size 4.142822 MiB 00:05:32.555 size: 1.000366 MiB name: RG_ring_0_12092 00:05:32.555 size: 1.000366 MiB name: RG_ring_1_12092 00:05:32.555 size: 1.000366 MiB name: RG_ring_4_12092 00:05:32.555 size: 1.000366 MiB name: RG_ring_5_12092 00:05:32.555 size: 0.125366 MiB name: RG_ring_2_12092 00:05:32.555 size: 0.015991 MiB name: RG_ring_3_12092 00:05:32.555 end memzones------- 00:05:32.555 05:57:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:32.555 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:32.555 list of free elements. size: 18.514832 MiB 00:05:32.555 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:32.555 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:32.555 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:32.555 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:32.555 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:32.555 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:32.555 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:32.555 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:32.555 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:32.555 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:32.555 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:32.555 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:32.555 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:32.555 element at address: 0x200019200000 with size: 0.491150 MiB 
00:05:32.555 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:32.555 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:32.555 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:32.555 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:32.555 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:32.555 list of standard malloc elements. size: 199.220764 MiB 00:05:32.555 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:32.555 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:32.555 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:32.555 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:32.555 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:32.555 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:32.555 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:32.555 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:32.555 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:32.555 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:32.555 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:32.555 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:32.555 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:32.555 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:32.555 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:32.555 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:32.555 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:32.555 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:32.555 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:32.555 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ff580 with 
size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:32.556 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:32.556 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:32.556 list of memzone associated elements. 
size: 602.264404 MiB 00:05:32.556 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:32.556 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:32.556 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:32.556 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:32.556 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:32.556 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_12092_0 00:05:32.556 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:32.556 associated memzone info: size: 48.002930 MiB name: MP_evtpool_12092_0 00:05:32.556 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:32.556 associated memzone info: size: 48.002930 MiB name: MP_msgpool_12092_0 00:05:32.556 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:32.556 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:32.556 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:32.556 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:32.556 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:32.556 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_12092 00:05:32.556 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:32.556 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_12092 00:05:32.556 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:32.556 associated memzone info: size: 1.007996 MiB name: MP_evtpool_12092 00:05:32.556 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:32.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:32.556 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:32.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:32.556 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:32.556 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:32.556 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:32.556 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:32.556 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:32.556 associated memzone info: size: 1.000366 MiB name: RG_ring_0_12092 00:05:32.556 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:32.556 associated memzone info: size: 1.000366 MiB name: RG_ring_1_12092 00:05:32.556 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:32.556 associated memzone info: size: 1.000366 MiB name: RG_ring_4_12092 00:05:32.556 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:32.556 associated memzone info: size: 1.000366 MiB name: RG_ring_5_12092 00:05:32.556 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:32.556 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_12092 00:05:32.556 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:32.556 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:32.556 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:32.556 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:32.556 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:32.556 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:32.556 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:32.556 associated memzone info: size: 0.125366 MiB name: RG_ring_2_12092 00:05:32.556 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:32.556 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:32.556 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:32.556 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:32.556 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:32.556 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_12092 00:05:32.556 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:32.556 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:32.556 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:32.556 associated memzone info: size: 0.000183 MiB name: MP_msgpool_12092 00:05:32.556 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:32.556 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_12092 00:05:32.556 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:32.556 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:32.556 05:57:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:32.556 05:57:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 12092 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 12092 ']' 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 12092 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 12092 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 12092' 00:05:32.556 killing process with pid 12092 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 12092 00:05:32.556 05:57:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 12092 00:05:35.086 00:05:35.086 real 0m4.106s 00:05:35.086 user 0m4.132s 00:05:35.086 sys 
0m0.580s 00:05:35.086 05:57:46 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.086 05:57:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.086 ************************************ 00:05:35.086 END TEST dpdk_mem_utility 00:05:35.086 ************************************ 00:05:35.086 05:57:46 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.086 05:57:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.086 05:57:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.086 05:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.086 ************************************ 00:05:35.086 START TEST event 00:05:35.086 ************************************ 00:05:35.086 05:57:46 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.086 * Looking for test storage... 00:05:35.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:35.086 05:57:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:35.086 05:57:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.086 05:57:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.086 05:57:46 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:35.086 05:57:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.086 05:57:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.086 ************************************ 00:05:35.086 START TEST event_perf 00:05:35.086 ************************************ 00:05:35.086 05:57:46 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:05:35.344 Running I/O for 1 seconds...[2024-07-26 05:57:46.423317] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:35.344 [2024-07-26 05:57:46.423424] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12565 ] 00:05:35.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.344 [2024-07-26 05:57:46.549957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.601 [2024-07-26 05:57:46.817848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.602 [2024-07-26 05:57:46.817906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.602 [2024-07-26 05:57:46.817952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.602 [2024-07-26 05:57:46.817963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.973 Running I/O for 1 seconds... 00:05:36.973 lcore 0: 199030 00:05:36.973 lcore 1: 199028 00:05:36.973 lcore 2: 199029 00:05:36.973 lcore 3: 199030 00:05:36.973 done. 
00:05:36.973 00:05:36.973 real 0m1.902s 00:05:36.973 user 0m4.708s 00:05:36.973 sys 0m0.177s 00:05:36.973 05:57:48 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.973 05:57:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.973 ************************************ 00:05:36.973 END TEST event_perf 00:05:36.973 ************************************ 00:05:37.231 05:57:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.231 05:57:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:37.231 05:57:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.231 05:57:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.231 ************************************ 00:05:37.231 START TEST event_reactor 00:05:37.231 ************************************ 00:05:37.231 05:57:48 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.231 [2024-07-26 05:57:48.377743] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:37.231 [2024-07-26 05:57:48.377854] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12847 ] 00:05:37.231 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.231 [2024-07-26 05:57:48.506922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.489 [2024-07-26 05:57:48.767561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.387 test_start 00:05:39.387 oneshot 00:05:39.387 tick 100 00:05:39.387 tick 100 00:05:39.387 tick 250 00:05:39.387 tick 100 00:05:39.387 tick 100 00:05:39.387 tick 100 00:05:39.387 tick 250 00:05:39.387 tick 500 00:05:39.387 tick 100 00:05:39.387 tick 100 00:05:39.387 tick 250 00:05:39.387 tick 100 00:05:39.387 tick 100 00:05:39.387 test_end 00:05:39.387 00:05:39.387 real 0m1.892s 00:05:39.387 user 0m1.726s 00:05:39.387 sys 0m0.155s 00:05:39.387 05:57:50 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.387 05:57:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.387 ************************************ 00:05:39.387 END TEST event_reactor 00:05:39.387 ************************************ 00:05:39.387 05:57:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.387 05:57:50 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:39.387 05:57:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.387 05:57:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.387 ************************************ 00:05:39.387 START TEST event_reactor_perf 00:05:39.387 ************************************ 00:05:39.387 05:57:50 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.387 [2024-07-26 05:57:50.318405] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:39.387 [2024-07-26 05:57:50.318517] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13129 ] 00:05:39.387 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.387 [2024-07-26 05:57:50.437665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.387 [2024-07-26 05:57:50.699039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.317 test_start 00:05:41.317 test_end 00:05:41.317 Performance: 269212 events per second 00:05:41.317 00:05:41.317 real 0m1.879s 00:05:41.317 user 0m1.717s 00:05:41.317 sys 0m0.152s 00:05:41.317 05:57:52 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.317 05:57:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.317 ************************************ 00:05:41.317 END TEST event_reactor_perf 00:05:41.317 ************************************ 00:05:41.317 05:57:52 event -- event/event.sh@49 -- # uname -s 00:05:41.317 05:57:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.317 05:57:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.317 05:57:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.317 05:57:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.317 05:57:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.317 ************************************ 00:05:41.317 START TEST event_scheduler 00:05:41.317 ************************************ 00:05:41.317 
05:57:52 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.317 * Looking for test storage... 00:05:41.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:41.317 05:57:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.317 05:57:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=13327 00:05:41.317 05:57:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.317 05:57:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.317 05:57:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 13327 00:05:41.317 05:57:52 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 13327 ']' 00:05:41.317 05:57:52 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.317 05:57:52 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.317 05:57:52 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.317 05:57:52 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.317 05:57:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.317 [2024-07-26 05:57:52.344828] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:41.317 [2024-07-26 05:57:52.344993] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13327 ] 00:05:41.317 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.317 [2024-07-26 05:57:52.467663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.574 [2024-07-26 05:57:52.688745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.574 [2024-07-26 05:57:52.688791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.574 [2024-07-26 05:57:52.688857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.574 [2024-07-26 05:57:52.688863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.150 05:57:53 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.150 05:57:53 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:42.150 05:57:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.150 05:57:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.150 05:57:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.150 [2024-07-26 05:57:53.275594] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:42.150 [2024-07-26 05:57:53.275665] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.150 [2024-07-26 05:57:53.275700] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.150 [2024-07-26 05:57:53.275724] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.150 [2024-07-26 05:57:53.275747] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:05:42.150 05:57:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.150 05:57:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.150 05:57:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.150 05:57:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.414 [2024-07-26 05:57:53.587252] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:42.414 05:57:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.414 05:57:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.414 05:57:53 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.414 05:57:53 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.414 05:57:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.414 ************************************ 00:05:42.414 START TEST scheduler_create_thread 00:05:42.414 ************************************ 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.414 2 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.414 3 00:05:42.414 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 4 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 5 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 6 
00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 7 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 8 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 9 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.415 05:57:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 10 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.415 05:57:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 05:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.980 00:05:42.980 real 0m0.595s 00:05:42.980 user 0m0.013s 00:05:42.980 sys 0m0.002s 00:05:42.980 05:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.980 05:57:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 ************************************ 00:05:42.980 END TEST scheduler_create_thread 00:05:42.980 ************************************ 00:05:42.980 05:57:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:42.980 05:57:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 13327 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 13327 ']' 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 13327 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 13327 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:42.980 05:57:54 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 13327' 00:05:42.980 killing process with pid 13327 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 13327 00:05:42.980 05:57:54 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 13327 00:05:43.545 [2024-07-26 05:57:54.691646] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:44.916 00:05:44.916 real 0m3.644s 00:05:44.916 user 0m7.019s 00:05:44.916 sys 0m0.473s 00:05:44.916 05:57:55 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.916 05:57:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.916 ************************************ 00:05:44.916 END TEST event_scheduler 00:05:44.916 ************************************ 00:05:44.916 05:57:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:44.916 05:57:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:44.916 05:57:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.916 05:57:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.916 05:57:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.916 ************************************ 00:05:44.916 START TEST app_repeat 00:05:44.916 ************************************ 00:05:44.916 05:57:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.916 
05:57:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=13892 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 13892' 00:05:44.916 Process app_repeat pid: 13892 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:44.916 spdk_app_start Round 0 00:05:44.916 05:57:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 13892 /var/tmp/spdk-nbd.sock 00:05:44.916 05:57:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 13892 ']' 00:05:44.916 05:57:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.916 05:57:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.916 05:57:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.916 05:57:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.916 05:57:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.916 [2024-07-26 05:57:55.963093] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:44.916 [2024-07-26 05:57:55.963240] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13892 ] 00:05:44.916 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.916 [2024-07-26 05:57:56.090650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.175 [2024-07-26 05:57:56.340901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.175 [2024-07-26 05:57:56.340907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.741 05:57:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.741 05:57:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.741 05:57:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.999 Malloc0 00:05:45.999 05:57:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.257 Malloc1 00:05:46.257 05:57:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.257 05:57:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.516 /dev/nbd0 00:05:46.516 05:57:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.516 05:57:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.516 05:57:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.774 1+0 records in 00:05:46.774 1+0 records out 00:05:46.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193822 s, 21.1 MB/s 00:05:46.774 05:57:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.774 05:57:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.774 05:57:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.774 05:57:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.774 05:57:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.774 05:57:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.774 05:57:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.774 05:57:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.032 /dev/nbd1 00:05:47.032 05:57:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.032 05:57:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:47.032 05:57:58 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.032 1+0 records in 00:05:47.032 1+0 records out 00:05:47.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228036 s, 18.0 MB/s 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:47.032 05:57:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:47.032 05:57:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.032 05:57:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.032 05:57:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.032 05:57:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.032 05:57:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.290 { 00:05:47.290 "nbd_device": "/dev/nbd0", 00:05:47.290 "bdev_name": "Malloc0" 00:05:47.290 }, 00:05:47.290 { 00:05:47.290 "nbd_device": "/dev/nbd1", 00:05:47.290 "bdev_name": "Malloc1" 00:05:47.290 } 00:05:47.290 ]' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.290 { 
00:05:47.290 "nbd_device": "/dev/nbd0", 00:05:47.290 "bdev_name": "Malloc0" 00:05:47.290 }, 00:05:47.290 { 00:05:47.290 "nbd_device": "/dev/nbd1", 00:05:47.290 "bdev_name": "Malloc1" 00:05:47.290 } 00:05:47.290 ]' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.290 /dev/nbd1' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.290 /dev/nbd1' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.290 256+0 records in 00:05:47.290 256+0 records out 00:05:47.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503662 s, 208 MB/s 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
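The `waitfornbd` helper traced above (common/autotest_common.sh@868-889) polls /proc/partitions until the kernel registers the nbd device, then proves the device serves data with a single direct-I/O read. A minimal standalone sketch of that pattern — the function name matches the trace, but the retry parameters and temp-file handling here are illustrative, not the exact SPDK helper:

```shell
# Sketch of the waitfornbd pattern from the trace: poll /proc/partitions
# for the device name, then confirm it is readable with one 4 KiB
# direct-I/O read. Retry count and sleep interval are illustrative.
waitfornbd() {
    local nbd_name=$1 retries=${2:-20} i
    for ((i = 1; i <= retries; i++)); do
        # -w matches the whole word, so nbd1 does not match nbd10
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= retries)) || return 1  # device never appeared in /proc/partitions

    # One direct-I/O read proves the device is actually backed by data;
    # the traced run then checks the copied block is non-empty via stat.
    local tmp size
    tmp=$(mktemp)
    dd "if=/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}
```

The trace shows this succeeding on the first attempt for both nbd0 and nbd1: the `grep -q -w` hits, `break` fires, and the 1+0-record dd readback follows immediately.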
00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.290 256+0 records in 00:05:47.290 256+0 records out 00:05:47.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024688 s, 42.5 MB/s 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.290 256+0 records in 00:05:47.290 256+0 records out 00:05:47.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029527 s, 35.5 MB/s 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.290 05:57:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.548 05:57:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.806 05:57:59 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.806 05:57:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.064 05:57:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.064 05:57:59 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:05:48.064 05:57:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.631 05:57:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.006 [2024-07-26 05:58:01.168275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.264 [2024-07-26 05:58:01.422943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.265 [2024-07-26 05:58:01.422942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.523 [2024-07-26 05:58:01.643410] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.523 [2024-07-26 05:58:01.643498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.455 05:58:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.455 05:58:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:51.455 spdk_app_start Round 1 00:05:51.455 05:58:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 13892 /var/tmp/spdk-nbd.sock 00:05:51.455 05:58:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 13892 ']' 00:05:51.455 05:58:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.455 05:58:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.455 05:58:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
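The data check that ran before this round boundary is `nbd_dd_data_verify` (bdev/nbd_common.sh@70-85): fill a temp file with 1 MiB from /dev/urandom, dd it onto every nbd device, then compare each device back against the file byte-for-byte with `cmp`. A reduced sketch of that round trip; the traced run adds `oflag=direct` when writing to the real /dev/nbd* devices, which is omitted here so the sketch also works on regular files:

```shell
# Sketch of the nbd_dd_data_verify write/verify round trip seen in the
# trace. Targets are passed as arguments rather than hard-coded to
# /dev/nbd0 and /dev/nbd1.
nbd_dd_data_verify() {
    local tmp_file=$1 operation=$2
    shift 2
    local nbd_list=("$@") i
    if [ "$operation" = write ]; then
        # 256 x 4 KiB = 1 MiB of random reference data
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            # cmp exits non-zero at the first mismatching byte or early EOF
            cmp -b -n 1M "$tmp_file" "$i" || return 1
        done
    fi
}
```

In the traced run the same temp file (`nbdrandtest`) is written to both devices, compared back after the block layer has served it, and then removed with `rm`.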
00:05:51.455 05:58:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.455 05:58:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.713 05:58:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.713 05:58:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:51.713 05:58:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.278 Malloc0 00:05:52.278 05:58:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.535 Malloc1 00:05:52.535 05:58:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.535 05:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.536 05:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.536 05:58:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.794 /dev/nbd0 00:05:52.794 05:58:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.794 05:58:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.794 1+0 records in 00:05:52.794 1+0 records out 00:05:52.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205225 s, 20.0 MB/s 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.794 05:58:03 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.794 05:58:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.794 05:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.794 05:58:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.794 05:58:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.052 /dev/nbd1 00:05:53.052 05:58:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.052 05:58:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.052 1+0 records in 00:05:53.052 1+0 records out 00:05:53.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190082 s, 21.5 MB/s 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:53.052 05:58:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:53.052 05:58:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.052 05:58:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.052 05:58:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.052 05:58:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.052 05:58:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.310 { 00:05:53.310 "nbd_device": "/dev/nbd0", 00:05:53.310 "bdev_name": "Malloc0" 00:05:53.310 }, 00:05:53.310 { 00:05:53.310 "nbd_device": "/dev/nbd1", 00:05:53.310 "bdev_name": "Malloc1" 00:05:53.310 } 00:05:53.310 ]' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.310 { 00:05:53.310 "nbd_device": "/dev/nbd0", 00:05:53.310 "bdev_name": "Malloc0" 00:05:53.310 }, 00:05:53.310 { 00:05:53.310 "nbd_device": "/dev/nbd1", 00:05:53.310 "bdev_name": "Malloc1" 00:05:53.310 } 00:05:53.310 ]' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.310 /dev/nbd1' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.310 /dev/nbd1' 00:05:53.310 
05:58:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.310 256+0 records in 00:05:53.310 256+0 records out 00:05:53.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502442 s, 209 MB/s 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.310 256+0 records in 00:05:53.310 256+0 records out 00:05:53.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025395 s, 41.3 MB/s 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.310 256+0 records in 00:05:53.310 256+0 records out 00:05:53.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282679 s, 37.1 MB/s 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.310 05:58:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.567 05:58:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.567 05:58:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.567 05:58:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.567 05:58:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.567 05:58:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.568 05:58:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.568 05:58:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.568 05:58:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.568 05:58:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.568 05:58:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.825 05:58:05 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.825 05:58:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.083 05:58:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.083 05:58:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.083 05:58:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.083 05:58:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.083 05:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.083 05:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.083 05:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.342 05:58:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.342 05:58:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.342 05:58:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.342 05:58:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.342 05:58:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.342 05:58:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.600 05:58:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.992 [2024-07-26 05:58:07.238723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.271 [2024-07-26 05:58:07.494311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.271 [2024-07-26 05:58:07.494312] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.530 [2024-07-26 05:58:07.703464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.530 [2024-07-26 05:58:07.703548] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.903 05:58:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.903 05:58:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.903 spdk_app_start Round 2 00:05:57.903 05:58:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 13892 /var/tmp/spdk-nbd.sock 00:05:57.903 05:58:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 13892 ']' 00:05:57.903 05:58:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.903 05:58:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.903 05:58:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
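`spdk_app_start Round 2` above marks the third pass of event/event.sh's outer loop: each round restarts the target, waits for the RPC socket, rebuilds Malloc0/Malloc1, reruns the nbd attach/write/verify/detach cycle, kills the instance with SIGTERM, and sleeps 3 seconds before the next round. A skeleton of that control flow with the round body stubbed out as an injected command — the real body is the RPC/nbd sequence traced above, so everything inside the loop here is a placeholder:

```shell
# Skeleton of the app_repeat outer loop from event/event.sh: three rounds,
# each announcing itself and running the same start/verify/kill body.
# The body is passed in as a command so the sketch stays self-contained.
app_repeat() {
    local round_body=$1 i
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # Real body per the trace: waitforlisten, bdev_malloc_create x2,
        # nbd_rpc_data_verify, spdk_kill_instance SIGTERM, sleep 3
        "$round_body" "$i" || return 1
    done
}
```

Repeating the identical cycle is the point of the test: it checks that a fresh app instance can reclaim the same nbd devices and bdev names after the previous instance was killed.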
00:05:57.903 05:58:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:57.903 05:58:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:57.903 05:58:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:57.903 05:58:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:57.903 05:58:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:58.162 Malloc0
00:05:58.162 05:58:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:58.422 Malloc1
00:05:58.422 05:58:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.422 05:58:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:58.680 /dev/nbd0
00:05:58.680 05:58:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:58.680 05:58:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:58.680 05:58:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:58.681 1+0 records in
00:05:58.681 1+0 records out
00:05:58.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191129 s, 21.4 MB/s
00:05:58.681 05:58:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:58.681 05:58:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:58.681 05:58:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:58.681 05:58:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:58.681 05:58:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:58.681 05:58:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:58.681 05:58:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.681 05:58:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:58.938 /dev/nbd1
00:05:58.938 05:58:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:58.938 05:58:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:58.938 05:58:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:58.938 05:58:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:58.938 05:58:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:58.938 05:58:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:58.938 05:58:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:59.196 1+0 records in
00:05:59.196 1+0 records out
00:05:59.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175512 s, 23.3 MB/s
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:59.196 05:58:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:59.196 05:58:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:59.196 05:58:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:59.196 05:58:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:59.196 05:58:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.196 05:58:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:59.455 {
00:05:59.455 "nbd_device": "/dev/nbd0",
00:05:59.455 "bdev_name": "Malloc0"
00:05:59.455 },
00:05:59.455 {
00:05:59.455 "nbd_device": "/dev/nbd1",
00:05:59.455 "bdev_name": "Malloc1"
00:05:59.455 }
00:05:59.455 ]'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:59.455 {
00:05:59.455 "nbd_device": "/dev/nbd0",
00:05:59.455 "bdev_name": "Malloc0"
00:05:59.455 },
00:05:59.455 {
00:05:59.455 "nbd_device": "/dev/nbd1",
00:05:59.455 "bdev_name": "Malloc1"
00:05:59.455 }
00:05:59.455 ]'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:59.455 /dev/nbd1'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:59.455 /dev/nbd1'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:59.455 256+0 records in
00:05:59.455 256+0 records out
00:05:59.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511357 s, 205 MB/s
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:59.455 256+0 records in
00:05:59.455 256+0 records out
00:05:59.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242226 s, 43.3 MB/s
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:59.455 256+0 records in
00:05:59.455 256+0 records out
00:05:59.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325902 s, 32.2 MB/s
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:59.455 05:58:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:59.713 05:58:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:59.970 05:58:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:59.970 05:58:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:59.970 05:58:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:59.970 05:58:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.970 05:58:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.970 05:58:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:59.970 05:58:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:59.971 05:58:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.971 05:58:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:59.971 05:58:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.971 05:58:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:00.229 05:58:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:00.229 05:58:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:00.795 05:58:11 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:02.185 [2024-07-26 05:58:13.331417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:02.446 [2024-07-26 05:58:13.585237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:02.446 [2024-07-26 05:58:13.585240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.704 [2024-07-26 05:58:13.803427] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:02.704 [2024-07-26 05:58:13.803513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:03.636 05:58:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 13892 /var/tmp/spdk-nbd.sock
00:06:03.636 05:58:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 13892 ']'
00:06:03.636 05:58:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:03.636 05:58:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:03.636 05:58:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:03.636 05:58:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:03.636 05:58:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:03.894 05:58:15 event.app_repeat -- event/event.sh@39 -- # killprocess 13892
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 13892 ']'
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 13892
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 13892
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 13892'
killing process with pid 13892
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@969 -- # kill 13892
00:06:03.894 05:58:15 event.app_repeat -- common/autotest_common.sh@974 -- # wait 13892
00:06:05.267 spdk_app_start is called in Round 0.
00:06:05.267 Shutdown signal received, stop current app iteration
00:06:05.267 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization...
00:06:05.267 spdk_app_start is called in Round 1.
00:06:05.267 Shutdown signal received, stop current app iteration
00:06:05.267 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization...
00:06:05.267 spdk_app_start is called in Round 2.
00:06:05.267 Shutdown signal received, stop current app iteration
00:06:05.267 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization...
00:06:05.267 spdk_app_start is called in Round 3.
00:06:05.267 Shutdown signal received, stop current app iteration
00:06:05.267 05:58:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:05.267 05:58:16 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:05.267
00:06:05.267 real 0m20.566s
00:06:05.267 user 0m42.135s
00:06:05.267 sys 0m3.350s
00:06:05.267 05:58:16 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:05.267 05:58:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:05.267 ************************************
00:06:05.267 END TEST app_repeat
00:06:05.267 ************************************
00:06:05.267 05:58:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:05.267 05:58:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:05.267 05:58:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:05.267 05:58:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:05.267 05:58:16 event -- common/autotest_common.sh@10 -- # set +x
00:06:05.267 ************************************
00:06:05.267 START TEST cpu_locks
00:06:05.267 ************************************
00:06:05.267 05:58:16 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:05.267 * Looking for test storage...
00:06:05.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:05.267 05:58:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:05.267 05:58:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:05.267 05:58:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:05.267 05:58:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:05.267 05:58:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:05.267 05:58:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:05.267 05:58:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:05.525 ************************************
00:06:05.525 START TEST default_locks
00:06:05.525 ************************************
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=16517
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 16517
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 16517 ']'
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:05.525 05:58:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:05.525 [2024-07-26 05:58:16.692250] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-26 05:58:16.692385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid16517 ]
00:06:05.525 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.525 [2024-07-26 05:58:16.817744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.783 [2024-07-26 05:58:17.071682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.716 05:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:06.716 05:58:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:06:06.716 05:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 16517
00:06:06.716 05:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 16517
00:06:06.716 05:58:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:06.974 lslocks: write error
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 16517
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 16517 ']'
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 16517
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 16517
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 16517'
killing process with pid 16517
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 16517
00:06:06.974 05:58:18 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 16517
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 16517
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 16517
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 16517
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 16517 ']'
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:09.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (16517) - No such process
00:06:09.503 ERROR: process (pid: 16517) is no longer running
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:09.503 05:58:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:09.759 05:58:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:09.760
00:06:09.760 real 0m4.231s
00:06:09.760 user 0m4.217s
00:06:09.760 sys 0m0.716s
00:06:09.760 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.760 05:58:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:09.760 ************************************
00:06:09.760 END TEST default_locks
00:06:09.760 ************************************
00:06:09.760 05:58:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:09.760 05:58:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:09.760 05:58:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:09.760 05:58:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:09.760 ************************************
00:06:09.760 START TEST default_locks_via_rpc
00:06:09.760 ************************************
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=17077
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 17077
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 17077 ']'
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:09.760 05:58:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.760 [2024-07-26 05:58:20.985594] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-26 05:58:20.985738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17077 ]
00:06:10.017 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.017 [2024-07-26 05:58:21.115280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.274 [2024-07-26 05:58:21.375415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 17077
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 17077
00:06:11.209 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:11.467 05:58:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 17077
00:06:11.467 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 17077 ']'
00:06:11.467 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 17077
00:06:11.467 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:06:11.467 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:11.467 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 17077
00:06:11.467 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:11.468 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:11.468 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 17077'
killing process with pid 17077
00:06:11.468 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 17077
00:06:11.468 05:58:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 17077
00:06:14.038
00:06:14.038 real 0m4.270s
00:06:14.038 user 0m4.205s
00:06:14.038 sys 0m0.770s
00:06:14.038 05:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:14.038 05:58:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.038 ************************************
00:06:14.038 END TEST default_locks_via_rpc
00:06:14.038 ************************************
00:06:14.038 05:58:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:14.038 05:58:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:14.038 05:58:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:14.038 05:58:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:14.038 ************************************
00:06:14.038 START TEST non_locking_app_on_locked_coremask
00:06:14.038 ************************************
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=17637
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 17637 /var/tmp/spdk.sock
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 17637 ']'
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:14.038 05:58:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:14.297 [2024-07-26 05:58:25.306393] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-26 05:58:25.306568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17637 ]
00:06:14.297 EAL: No free 2048 kB hugepages reported on node 1
00:06:14.297 [2024-07-26 05:58:25.433454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.555 [2024-07-26 05:58:25.687119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=17781
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 17781 /var/tmp/spdk2.sock
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 17781 ']'
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:15.490 05:58:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.491 [2024-07-26 05:58:26.652456] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-26 05:58:26.652598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17781 ]
00:06:15.491 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.749 [2024-07-26 05:58:26.841377] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:15.749 [2024-07-26 05:58:26.841437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.316 [2024-07-26 05:58:27.363934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.217 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.217 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:18.217 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 17637 00:06:18.217 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 17637 00:06:18.217 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.783 lslocks: write error 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 17637 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 17637 ']' 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 17637 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 17637 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 17637' 00:06:18.783 killing process with pid 17637 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 17637 00:06:18.783 05:58:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 17637 00:06:24.048 05:58:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 17781 00:06:24.048 05:58:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 17781 ']' 00:06:24.048 05:58:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 17781 00:06:24.048 05:58:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.048 05:58:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.048 05:58:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 17781 00:06:24.048 05:58:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.048 05:58:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.048 05:58:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 17781' 00:06:24.048 killing process with pid 17781 00:06:24.048 05:58:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 17781 00:06:24.048 05:58:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 17781 00:06:26.577 00:06:26.577 real 0m12.329s 00:06:26.577 user 0m12.655s 00:06:26.577 sys 0m1.513s 00:06:26.577 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.577 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.577 ************************************ 00:06:26.577 END TEST non_locking_app_on_locked_coremask 00:06:26.577 ************************************ 00:06:26.577 05:58:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:26.577 05:58:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.577 05:58:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.577 05:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.577 ************************************ 00:06:26.577 START TEST locking_app_on_unlocked_coremask 00:06:26.577 ************************************ 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=19129 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 19129 /var/tmp/spdk.sock 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 19129 ']' 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.577 05:58:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.577 [2024-07-26 05:58:37.696022] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:26.577 [2024-07-26 05:58:37.696224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19129 ] 00:06:26.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.577 [2024-07-26 05:58:37.819499] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.577 [2024-07-26 05:58:37.819555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.836 [2024-07-26 05:58:38.073553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=19274 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 19274 /var/tmp/spdk2.sock 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 19274 ']' 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.770 05:58:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.770 [2024-07-26 05:58:39.070467] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:27.770 [2024-07-26 05:58:39.070625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19274 ] 00:06:28.028 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.028 [2024-07-26 05:58:39.260194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.594 [2024-07-26 05:58:39.782103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.496 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.496 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.496 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 19274 00:06:30.496 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 19274 00:06:30.496 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.063 lslocks: write error 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 19129 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 19129 ']' 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 19129 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 19129 00:06:31.064 05:58:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 19129' 00:06:31.064 killing process with pid 19129 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 19129 00:06:31.064 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 19129 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 19274 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 19274 ']' 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 19274 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 19274 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 19274' 00:06:36.404 killing process with pid 19274 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # 
kill 19274 00:06:36.404 05:58:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 19274 00:06:38.940 00:06:38.940 real 0m12.361s 00:06:38.940 user 0m12.747s 00:06:38.940 sys 0m1.546s 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.940 ************************************ 00:06:38.940 END TEST locking_app_on_unlocked_coremask 00:06:38.940 ************************************ 00:06:38.940 05:58:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:38.940 05:58:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.940 05:58:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.940 05:58:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.940 ************************************ 00:06:38.940 START TEST locking_app_on_locked_coremask 00:06:38.940 ************************************ 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=20569 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 20569 /var/tmp/spdk.sock 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 20569 ']' 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.940 05:58:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.940 [2024-07-26 05:58:50.095501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:38.940 [2024-07-26 05:58:50.095655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20569 ] 00:06:38.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.940 [2024-07-26 05:58:50.220701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.198 [2024-07-26 05:58:50.480040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=20773 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.133 05:58:51 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 20773 /var/tmp/spdk2.sock 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 20773 /var/tmp/spdk2.sock 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 20773 /var/tmp/spdk2.sock 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 20773 ']' 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.133 05:58:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.133 [2024-07-26 05:58:51.454968] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:40.133 [2024-07-26 05:58:51.455147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20773 ] 00:06:40.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.391 [2024-07-26 05:58:51.641296] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 20569 has claimed it. 00:06:40.391 [2024-07-26 05:58:51.641387] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (20773) - No such process 00:06:40.960 ERROR: process (pid: 20773) is no longer running 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 20569 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 20569 00:06:40.960 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.526 lslocks: write error 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 20569 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 20569 ']' 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 20569 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 20569 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 20569' 00:06:41.526 killing process with pid 20569 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 20569 00:06:41.526 05:58:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 20569 00:06:44.063 00:06:44.063 real 0m5.116s 00:06:44.063 user 0m5.373s 00:06:44.063 sys 0m0.943s 00:06:44.063 05:58:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.063 05:58:55 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.063 ************************************ 00:06:44.063 END TEST locking_app_on_locked_coremask 00:06:44.063 ************************************ 00:06:44.063 05:58:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.063 05:58:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.063 05:58:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.063 05:58:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.063 ************************************ 00:06:44.063 START TEST locking_overlapped_coremask 00:06:44.063 ************************************ 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=21213 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 21213 /var/tmp/spdk.sock 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 21213 ']' 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:44.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.063 05:58:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.063 [2024-07-26 05:58:55.272999] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:44.063 [2024-07-26 05:58:55.273193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21213 ] 00:06:44.063 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.321 [2024-07-26 05:58:55.399781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.580 [2024-07-26 05:58:55.658281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.580 [2024-07-26 05:58:55.658304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.580 [2024-07-26 05:58:55.658316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=21353 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 21353 /var/tmp/spdk2.sock 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 21353 /var/tmp/spdk2.sock 
00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 21353 /var/tmp/spdk2.sock 00:06:45.518 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 21353 ']' 00:06:45.519 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.519 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.519 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.519 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.519 05:58:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.519 [2024-07-26 05:58:56.654621] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:45.519 [2024-07-26 05:58:56.654764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21353 ] 00:06:45.519 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.519 [2024-07-26 05:58:56.837027] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 21213 has claimed it. 00:06:45.519 [2024-07-26 05:58:56.837166] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (21353) - No such process 00:06:46.088 ERROR: process (pid: 21353) is no longer running 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 21213 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 21213 ']' 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 21213 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 21213 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 21213' 00:06:46.088 killing process with pid 21213 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 21213 00:06:46.088 05:58:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 21213 00:06:48.627 00:06:48.628 real 0m4.436s 00:06:48.628 user 0m11.509s 00:06:48.628 sys 0m0.788s 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 
00:06:48.628 ************************************ 00:06:48.628 END TEST locking_overlapped_coremask 00:06:48.628 ************************************ 00:06:48.628 05:58:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.628 05:58:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.628 05:58:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.628 05:58:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.628 ************************************ 00:06:48.628 START TEST locking_overlapped_coremask_via_rpc 00:06:48.628 ************************************ 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=21776 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 21776 /var/tmp/spdk.sock 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 21776 ']' 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:48.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.628 05:58:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.628 [2024-07-26 05:58:59.752349] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:48.628 [2024-07-26 05:58:59.752486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21776 ] 00:06:48.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.628 [2024-07-26 05:58:59.882642] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.628 [2024-07-26 05:58:59.882704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.887 [2024-07-26 05:59:00.148439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.887 [2024-07-26 05:59:00.148463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.887 [2024-07-26 05:59:00.148469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=21927 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.823 05:59:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 21927 /var/tmp/spdk2.sock 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 21927 ']' 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.823 05:59:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.823 [2024-07-26 05:59:01.138086] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:49.823 [2024-07-26 05:59:01.138229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21927 ] 00:06:50.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.083 [2024-07-26 05:59:01.326416] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.083 [2024-07-26 05:59:01.326491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.650 [2024-07-26 05:59:01.854500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.650 [2024-07-26 05:59:01.858126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.650 [2024-07-26 05:59:01.858131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:52.556 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.816 05:59:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.816 [2024-07-26 05:59:03.899250] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 21776 has claimed it. 00:06:52.816 request: 00:06:52.816 { 00:06:52.816 "method": "framework_enable_cpumask_locks", 00:06:52.816 "req_id": 1 00:06:52.816 } 00:06:52.816 Got JSON-RPC error response 00:06:52.816 response: 00:06:52.816 { 00:06:52.816 "code": -32603, 00:06:52.816 "message": "Failed to claim CPU core: 2" 00:06:52.816 } 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 21776 /var/tmp/spdk.sock 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 21776 ']' 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.816 05:59:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 21927 /var/tmp/spdk2.sock 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 21927 ']' 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.075 00:06:53.075 real 0m4.744s 00:06:53.075 user 0m1.528s 00:06:53.075 sys 0m0.265s 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.075 05:59:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.075 ************************************ 00:06:53.075 END TEST locking_overlapped_coremask_via_rpc 00:06:53.075 ************************************ 00:06:53.334 05:59:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.334 05:59:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 21776 ]] 00:06:53.334 05:59:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 21776 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 21776 ']' 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 21776 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 21776 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 21776' 00:06:53.334 killing process with pid 21776 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 21776 00:06:53.334 05:59:04 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 21776 00:06:55.898 05:59:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 21927 ]] 00:06:55.898 05:59:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 21927 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 21927 ']' 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 21927 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 21927 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 21927' 00:06:55.898 killing 
process with pid 21927 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 21927 00:06:55.898 05:59:06 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 21927 00:06:57.804 05:59:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.804 05:59:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.804 05:59:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 21776 ]] 00:06:57.804 05:59:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 21776 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 21776 ']' 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 21776 00:06:57.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (21776) - No such process 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 21776 is not found' 00:06:57.804 Process with pid 21776 is not found 00:06:57.804 05:59:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 21927 ]] 00:06:57.804 05:59:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 21927 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 21927 ']' 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 21927 00:06:57.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (21927) - No such process 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 21927 is not found' 00:06:57.804 Process with pid 21927 is not found 00:06:57.804 05:59:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.804 00:06:57.804 real 0m52.445s 00:06:57.804 user 1m26.912s 00:06:57.804 sys 0m7.761s 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.804 05:59:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set 
+x 00:06:57.804 ************************************ 00:06:57.804 END TEST cpu_locks 00:06:57.804 ************************************ 00:06:57.804 00:06:57.804 real 1m22.689s 00:06:57.805 user 2m24.367s 00:06:57.805 sys 0m12.304s 00:06:57.805 05:59:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.805 05:59:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.805 ************************************ 00:06:57.805 END TEST event 00:06:57.805 ************************************ 00:06:57.805 05:59:09 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:57.805 05:59:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.805 05:59:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.805 05:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:57.805 ************************************ 00:06:57.805 START TEST thread 00:06:57.805 ************************************ 00:06:57.805 05:59:09 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:57.805 * Looking for test storage... 
00:06:57.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:57.805 05:59:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.805 05:59:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:57.805 05:59:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.805 05:59:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.805 ************************************ 00:06:57.805 START TEST thread_poller_perf 00:06:57.805 ************************************ 00:06:57.805 05:59:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.065 [2024-07-26 05:59:09.154003] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:58.065 [2024-07-26 05:59:09.154147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid22959 ] 00:06:58.065 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.065 [2024-07-26 05:59:09.296590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.325 [2024-07-26 05:59:09.550949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.325 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:59.704 ====================================== 00:06:59.704 busy:2716941266 (cyc) 00:06:59.704 total_run_count: 281000 00:06:59.704 tsc_hz: 2700000000 (cyc) 00:06:59.704 ====================================== 00:06:59.704 poller_cost: 9668 (cyc), 3580 (nsec) 00:06:59.704 00:06:59.704 real 0m1.895s 00:06:59.704 user 0m1.720s 00:06:59.704 sys 0m0.165s 00:06:59.704 05:59:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.704 05:59:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.704 ************************************ 00:06:59.704 END TEST thread_poller_perf 00:06:59.704 ************************************ 00:06:59.704 05:59:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.704 05:59:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:59.704 05:59:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.704 05:59:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.962 ************************************ 00:06:59.962 START TEST thread_poller_perf 00:06:59.962 ************************************ 00:06:59.962 05:59:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.962 [2024-07-26 05:59:11.098264] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:59.962 [2024-07-26 05:59:11.098395] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23239 ] 00:06:59.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.962 [2024-07-26 05:59:11.219894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.220 [2024-07-26 05:59:11.470592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.220 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:01.857 ====================================== 00:07:01.857 busy:2705062851 (cyc) 00:07:01.857 total_run_count: 3791000 00:07:01.857 tsc_hz: 2700000000 (cyc) 00:07:01.857 ====================================== 00:07:01.857 poller_cost: 713 (cyc), 264 (nsec) 00:07:01.857 00:07:01.857 real 0m1.860s 00:07:01.857 user 0m1.698s 00:07:01.857 sys 0m0.151s 00:07:01.857 05:59:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.857 05:59:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.857 ************************************ 00:07:01.857 END TEST thread_poller_perf 00:07:01.857 ************************************ 00:07:01.857 05:59:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.857 00:07:01.857 real 0m3.901s 00:07:01.857 user 0m3.470s 00:07:01.857 sys 0m0.420s 00:07:01.857 05:59:12 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.857 05:59:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.857 ************************************ 00:07:01.857 END TEST thread 00:07:01.857 ************************************ 00:07:01.857 05:59:12 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:01.857 05:59:12 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:07:01.857 05:59:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.857 05:59:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.857 05:59:12 -- common/autotest_common.sh@10 -- # set +x 00:07:01.858 ************************************ 00:07:01.858 START TEST app_cmdline 00:07:01.858 ************************************ 00:07:01.858 05:59:12 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.858 * Looking for test storage... 00:07:01.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.858 05:59:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.858 05:59:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=23554 00:07:01.858 05:59:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.858 05:59:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 23554 00:07:01.858 05:59:13 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 23554 ']' 00:07:01.858 05:59:13 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.858 05:59:13 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.858 05:59:13 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.858 05:59:13 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.858 05:59:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.858 [2024-07-26 05:59:13.134438] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:01.858 [2024-07-26 05:59:13.134583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23554 ] 00:07:02.116 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.116 [2024-07-26 05:59:13.256650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.375 [2024-07-26 05:59:13.515738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.311 05:59:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.311 05:59:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:03.311 05:59:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:03.570 { 00:07:03.570 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:03.570 "fields": { 00:07:03.570 "major": 24, 00:07:03.570 "minor": 9, 00:07:03.570 "patch": 0, 00:07:03.570 "suffix": "-pre", 00:07:03.570 "commit": "704257090" 00:07:03.570 } 00:07:03.570 } 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:03.570 05:59:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:03.570 05:59:14 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.830 request: 00:07:03.830 { 00:07:03.830 "method": "env_dpdk_get_mem_stats", 00:07:03.830 "req_id": 1 00:07:03.830 } 00:07:03.830 Got 
JSON-RPC error response 00:07:03.830 response: 00:07:03.830 { 00:07:03.830 "code": -32601, 00:07:03.830 "message": "Method not found" 00:07:03.830 } 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.830 05:59:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 23554 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 23554 ']' 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 23554 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.830 05:59:14 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 23554 00:07:03.830 05:59:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.830 05:59:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.830 05:59:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 23554' 00:07:03.830 killing process with pid 23554 00:07:03.830 05:59:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 23554 00:07:03.830 05:59:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 23554 00:07:06.368 00:07:06.368 real 0m4.541s 00:07:06.368 user 0m4.913s 00:07:06.368 sys 0m0.678s 00:07:06.368 05:59:17 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.368 05:59:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.368 ************************************ 00:07:06.368 END TEST app_cmdline 00:07:06.368 ************************************ 00:07:06.368 05:59:17 -- spdk/autotest.sh@190 -- # run_test version 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:06.368 05:59:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.368 05:59:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.368 05:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:06.368 ************************************ 00:07:06.368 START TEST version 00:07:06.368 ************************************ 00:07:06.368 05:59:17 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:06.368 * Looking for test storage... 00:07:06.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.368 05:59:17 version -- app/version.sh@17 -- # get_header_version major 00:07:06.368 05:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.368 05:59:17 version -- app/version.sh@17 -- # major=24 00:07:06.368 05:59:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:06.368 05:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.368 05:59:17 version -- app/version.sh@18 -- # minor=9 00:07:06.368 05:59:17 version -- app/version.sh@19 -- # get_header_version patch 00:07:06.368 05:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # tr -d '"' 
00:07:06.368 05:59:17 version -- app/version.sh@19 -- # patch=0 00:07:06.368 05:59:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:06.368 05:59:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.368 05:59:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.368 05:59:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:06.369 05:59:17 version -- app/version.sh@22 -- # version=24.9 00:07:06.369 05:59:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:06.369 05:59:17 version -- app/version.sh@28 -- # version=24.9rc0 00:07:06.369 05:59:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:06.369 05:59:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:06.369 05:59:17 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:06.369 05:59:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:06.369 00:07:06.369 real 0m0.108s 00:07:06.369 user 0m0.059s 00:07:06.369 sys 0m0.071s 00:07:06.369 05:59:17 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.369 05:59:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:06.369 ************************************ 00:07:06.369 END TEST version 00:07:06.369 ************************************ 00:07:06.628 05:59:17 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@202 -- # uname -s 00:07:06.628 05:59:17 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:06.628 05:59:17 -- 
spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:06.628 05:59:17 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:06.628 05:59:17 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:06.628 05:59:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.628 05:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 05:59:17 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:06.628 05:59:17 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:06.628 05:59:17 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.628 05:59:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:06.628 05:59:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.628 05:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 ************************************ 00:07:06.628 START TEST nvmf_tcp 00:07:06.628 ************************************ 00:07:06.628 05:59:17 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:06.628 * Looking for test storage... 00:07:06.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:06.628 05:59:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:06.628 05:59:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.628 05:59:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:06.628 05:59:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:06.628 05:59:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.628 05:59:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 ************************************ 00:07:06.628 START TEST nvmf_target_core 00:07:06.628 ************************************ 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:06.628 * Looking for test storage... 00:07:06.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.628 05:59:17 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 ************************************ 00:07:06.628 START TEST nvmf_abort 00:07:06.628 ************************************ 00:07:06.628 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.628 * Looking for test storage... 
00:07:06.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.887 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:06.888 05:59:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:06.888 05:59:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.787 05:59:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:08.787 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:08.788 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:08.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:08.788 05:59:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:08.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.788 05:59:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:08.788 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.788 05:59:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.788 05:59:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.788 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.788 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.788 05:59:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:07:08.788 00:07:08.788 --- 10.0.0.2 ping statistics --- 00:07:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.788 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:08.788 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:07:08.788 00:07:08.788 --- 10.0.0.1 ping statistics --- 00:07:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.788 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:08.789 05:59:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=25875 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 25875 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 25875 ']' 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.789 05:59:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.047 [2024-07-26 05:59:20.170967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:09.047 [2024-07-26 05:59:20.171139] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.047 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.047 [2024-07-26 05:59:20.323622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.306 [2024-07-26 05:59:20.592234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.306 [2024-07-26 05:59:20.592315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.306 [2024-07-26 05:59:20.592348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.306 [2024-07-26 05:59:20.592369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.306 [2024-07-26 05:59:20.592391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:09.306 [2024-07-26 05:59:20.592531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.306 [2024-07-26 05:59:20.592595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.306 [2024-07-26 05:59:20.592604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 [2024-07-26 05:59:21.092259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 Malloc0 00:07:09.874 05:59:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.874 Delay0 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.874 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.134 [2024-07-26 05:59:21.224582] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.134 05:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:10.134 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.134 [2024-07-26 05:59:21.433200] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:12.666 Initializing NVMe Controllers 00:07:12.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:12.666 controller IO queue size 128 less than required 00:07:12.666 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:12.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:12.666 Initialization complete. Launching workers. 
00:07:12.666 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25988 00:07:12.666 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26045, failed to submit 66 00:07:12.666 success 25988, unsuccess 57, failed 0 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:12.666 rmmod nvme_tcp 00:07:12.666 rmmod nvme_fabrics 00:07:12.666 rmmod nvme_keyring 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:12.666 05:59:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 25875 ']' 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 25875 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 25875 ']' 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 25875 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 25875 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 25875' 00:07:12.666 killing process with pid 25875 00:07:12.666 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 25875 00:07:12.667 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 25875 00:07:14.073 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.073 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.073 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.073 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.073 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.073 05:59:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.073 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.073 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:15.988 00:07:15.988 real 0m9.139s 00:07:15.988 user 0m14.752s 00:07:15.988 sys 0m2.714s 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.988 ************************************ 00:07:15.988 END TEST nvmf_abort 00:07:15.988 ************************************ 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.988 ************************************ 00:07:15.988 START TEST nvmf_ns_hotplug_stress 00:07:15.988 ************************************ 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:15.988 * Looking for test storage... 
00:07:15.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.988 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:15.989 05:59:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.989 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.896 05:59:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:17.896 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:17.896 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:17.896 Found net devices under 0000:0a:00.0: cvl_0_0
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:17.896 Found net devices under 0000:0a:00.1: cvl_0_1
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:07:17.896 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:17.897 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:18.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:18.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms
00:07:18.156
00:07:18.156 --- 10.0.0.2 ping statistics ---
00:07:18.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:18.156 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:18.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:18.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms
00:07:18.156
00:07:18.156 --- 10.0.0.1 ping statistics ---
00:07:18.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:18.156 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=28378
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 28378
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 28378 ']'
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:18.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:18.156 05:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:18.156 [2024-07-26 05:59:29.444470] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:18.156 [2024-07-26 05:59:29.444629] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:18.416 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.416 [2024-07-26 05:59:29.587520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:18.676 [2024-07-26 05:59:29.852420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:18.676 [2024-07-26 05:59:29.852511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:18.676 [2024-07-26 05:59:29.852545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:18.676 [2024-07-26 05:59:29.852567] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:18.676 [2024-07-26 05:59:29.852590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:18.676 [2024-07-26 05:59:29.852750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:18.676 [2024-07-26 05:59:29.852807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:18.676 [2024-07-26 05:59:29.852818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:19.243 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:19.501 [2024-07-26 05:59:30.643954] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:19.501 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:19.759 05:59:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:20.016 [2024-07-26 05:59:31.170428] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:20.016 05:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:20.283 05:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:20.544 Malloc0
00:07:20.544 05:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:20.802 Delay0
00:07:20.802 05:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:21.060 05:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:21.318 NULL1
00:07:21.318 05:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:21.576 05:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=28803
00:07:21.576 05:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:21.576 05:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:21.576 05:59:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:21.576 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.958 Read completed with error (sct=0, sc=11)
00:07:22.958 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:22.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:22.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:22.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:22.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:22.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:22.958 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:22.958 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:23.216 true
00:07:23.216 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:23.216 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:24.153 05:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.411 05:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:24.411 05:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:24.670 true
00:07:24.670 05:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:24.670 05:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:24.929 05:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.929 05:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:24.929 05:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:25.186 true
00:07:25.446 05:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:25.446 05:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:26.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:26.015 05:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:26.272 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:26.272 05:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:26.272 05:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:26.530 true
00:07:26.530 05:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:26.530 05:59:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:26.788 05:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:27.046 05:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:27.046 05:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:27.304 true
00:07:27.304 05:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:27.304 05:59:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:28.240 05:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:28.499 05:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:28.499 05:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:28.757 true
00:07:28.757 05:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:28.757 05:59:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:29.016 05:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:29.274 05:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:29.274 05:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:29.535 true
00:07:29.535 05:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:29.535 05:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:29.830 05:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:30.088 05:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:30.088 05:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:30.346 true
00:07:30.346 05:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:30.346 05:59:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:31.283 05:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:31.541 05:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:31.541 05:59:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:31.801 true
00:07:32.060 05:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:32.060 05:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:32.317 05:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:32.574 05:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:32.574 05:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:32.832 true
00:07:32.832 05:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:32.832 05:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:33.090 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:33.349 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:33.349 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:33.349 true
00:07:33.608 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:33.609 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:34.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:34.542 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:34.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:34.799 05:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:34.799 05:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:35.057 true
00:07:35.057 05:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:35.057 05:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:35.315 05:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:35.573 05:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:35.573 05:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:35.830 true
00:07:35.830 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:35.830 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:36.762 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:36.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:37.020 05:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:37.020 05:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:37.277 true
00:07:37.277 05:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:37.277 05:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:37.534 05:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:37.792 05:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:37.792 05:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:38.049 true
00:07:38.049 05:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:38.049 05:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:38.983 05:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:39.240 05:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:07:39.240 05:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:07:39.498 true
00:07:39.498 05:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:39.498 05:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.755 05:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:40.013 05:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:07:40.013 05:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:07:40.013 true
00:07:40.013 05:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:40.013 05:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.271 05:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:40.528 05:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:40.528 05:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:40.786 true
00:07:40.786 05:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:40.786 05:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:42.159 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:42.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:42.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:42.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:42.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:42.159 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:07:42.159 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:07:42.416 true
00:07:42.416 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:42.416 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.378 05:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.378 05:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:43.378 05:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:43.636 true 00:07:43.636 05:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:43.636 05:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.894 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.152 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:44.152 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:44.409 true 00:07:44.409 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:44.409 
05:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.667 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.924 05:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:44.924 05:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:45.182 true 00:07:45.182 05:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:45.182 05:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.115 05:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.681 05:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:46.681 05:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:46.681 true 00:07:46.681 05:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:46.681 05:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.939 05:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.197 05:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:47.197 05:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:47.455 true 00:07:47.455 05:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:47.455 05:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.713 05:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.971 05:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:47.971 05:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:48.229 true 00:07:48.229 05:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:48.229 05:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.603 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.603 06:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.603 06:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:49.603 06:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:49.861 true 00:07:49.861 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:49.861 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.119 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.377 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:50.377 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:50.635 true 00:07:50.635 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803 00:07:50.635 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.566 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11)
00:07:51.566 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:51.824 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:51.824 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:51.824 Initializing NVMe Controllers
00:07:51.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:51.824 Controller IO queue size 128, less than required.
00:07:51.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:51.824 Controller IO queue size 128, less than required.
00:07:51.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:51.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:51.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:51.824 Initialization complete. Launching workers.
00:07:51.824 ========================================================
00:07:51.824 Latency(us)
00:07:51.824 Device Information : IOPS MiB/s Average min max
00:07:51.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 751.76 0.37 83607.90 2999.42 1016968.05
00:07:51.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7636.78 3.73 16762.53 5043.41 386771.51
00:07:51.824 ========================================================
00:07:51.824 Total : 8388.54 4.10 22753.03 2999.42 1016968.05
00:07:51.824
00:07:52.082 true
00:07:52.082 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 28803
00:07:52.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (28803) - No such process
00:07:52.082 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 28803
00:07:52.082 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.339 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:52.597 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:52.597 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:52.597 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:52.597 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:52.597 06:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:52.855 null0 00:07:52.855 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.855 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.855 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:53.113 null1 00:07:53.113 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.113 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.113 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:53.388 null2 00:07:53.388 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.388 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.388 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:53.650 null3 00:07:53.650 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.650 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.650 06:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:53.908 null4 00:07:53.908 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:53.908 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:53.908 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:54.166 null5 00:07:54.166 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:54.166 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.166 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:54.424 null6 00:07:54.424 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:54.424 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.424 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:54.682 null7 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.682 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.683 06:00:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 32974 32975 32977 32979 32981 32983 32985 32987 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.683 06:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.941 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.200 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.200 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.200 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.200 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.200 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.200 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.464 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.723 06:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.982 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.982 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.982 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.982 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.982 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.982 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.982 06:00:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.982 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.241 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.242 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.533 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.792 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.793 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.050 06:00:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.050 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.050 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.050 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.050 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.050 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.050 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.050 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.309 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.567 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.567 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.568 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.568 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.568 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.568 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.568 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.568 06:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.826 06:00:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.826 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.084 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.084 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.085 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.085 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.085 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.085 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.085 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.085 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 
06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.342 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.599 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.600 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.600 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.858 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.858 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.858 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.858 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.858 06:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.117 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.391 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.391 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.391 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.391 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.391 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.391 06:00:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.392 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.392 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.659 06:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.918 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.176 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.177 rmmod nvme_tcp 00:08:00.177 rmmod nvme_fabrics 00:08:00.177 rmmod nvme_keyring 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 28378 ']' 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 28378 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 28378 ']' 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 28378 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 28378 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 28378' 00:08:00.177 killing process with pid 28378 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 28378 00:08:00.177 06:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 28378 00:08:01.553 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.553 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.553 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.553 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.553 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.553 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.553 06:00:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.553 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.461 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.720 00:08:03.720 real 0m47.690s 00:08:03.720 user 3m32.895s 00:08:03.720 sys 0m16.489s 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 ************************************ 00:08:03.720 END TEST nvmf_ns_hotplug_stress 00:08:03.720 ************************************ 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 ************************************ 00:08:03.720 START TEST nvmf_delete_subsystem 00:08:03.720 ************************************ 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:03.720 * Looking for test storage... 
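The xtrace output above shows the core of the hotplug stress test: a counted loop (ns_hotplug_stress.sh@16) that repeatedly attaches eight null-bdev namespaces to `nqn.2016-06.io.spdk:cnode1` via `nvmf_subsystem_add_ns` (@17) and then detaches them via `nvmf_subsystem_remove_ns` (@18). The following is a hedged, standalone sketch of that loop reconstructed from the log; the `rpc` function and the `stress_once` name are stand-ins introduced here (the real script invokes `scripts/rpc.py` against a live nvmf target), and the exact per-iteration ordering in the real run is interleaved rather than strictly sequential:

```shell
#!/usr/bin/env bash
# Sketch of the add/remove namespace stress loop seen in ns_hotplug_stress.sh.
# rpc() is a stub standing in for scripts/rpc.py so the sketch runs standalone;
# stress_once is a hypothetical helper name, not from the original script.
rpc() { echo "rpc.py $*"; }

# One stress iteration: attach nsid 1..8 backed by null0..null7, then detach all.
stress_once() {
    local nqn=$1 n
    for n in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$nqn" "$n"
    done
}

# The log shows the loop bounded by (( i < 10 )).
for ((i = 0; i < 10; ++i)); do
    stress_once nqn.2016-06.io.spdk:cnode1 > /dev/null
done
```

In the actual test the RPCs are fired concurrently (the log timestamps show batches landing within the same millisecond), which is what exercises the hotplug race paths this test targets.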
00:08:03.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.720 06:00:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.623 06:00:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.623 06:00:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:05.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:05.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.623 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.624 06:00:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:05.624 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:05.624 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.624 
06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:08:05.624 00:08:05.624 --- 10.0.0.2 ping statistics --- 00:08:05.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.624 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:08:05.624 00:08:05.624 --- 10.0.0.1 ping statistics --- 00:08:05.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.624 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=36373 00:08:05.624 06:00:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 36373 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 36373 ']' 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.624 06:00:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.883 [2024-07-26 06:00:17.014614] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:05.883 [2024-07-26 06:00:17.014743] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.883 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.883 [2024-07-26 06:00:17.146621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:06.142 [2024-07-26 06:00:17.446204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:06.142 [2024-07-26 06:00:17.446302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.142 [2024-07-26 06:00:17.446344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.142 [2024-07-26 06:00:17.446385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.142 [2024-07-26 06:00:17.446416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.142 [2024-07-26 06:00:17.446565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.142 [2024-07-26 06:00:17.446583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 [2024-07-26 06:00:17.980030] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 [2024-07-26 06:00:17.996903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.711 06:00:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 NULL1 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.711 06:00:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 Delay0 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=36525 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:06.711 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:06.971 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.971 [2024-07-26 06:00:18.122076] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:08.878 06:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.878 06:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.878 06:00:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error 
(sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 [2024-07-26 06:00:20.228735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, 
sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 [2024-07-26 06:00:20.229967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error 
(sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with 
error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 starting I/O failed: -6 00:08:09.139 [2024-07-26 06:00:20.231118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set 00:08:09.139 Write completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.139 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error 
(sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 Read completed with error (sct=0, sc=8) 00:08:09.140 Write completed with error (sct=0, sc=8) 00:08:09.140 [2024-07-26 06:00:20.231686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:08:10.077 [2024-07-26 06:00:21.181873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with 
error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 [2024-07-26 06:00:21.232899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error 
(sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 [2024-07-26 06:00:21.233776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(5) to be set 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 [2024-07-26 06:00:21.234304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x615000020380 is same with the state(5) to be set 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Write completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.077 Read completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Write completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 Read completed with error (sct=0, sc=8) 00:08:10.078 [2024-07-26 06:00:21.235841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set 00:08:10.078 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.078 Initializing NVMe Controllers 00:08:10.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:10.078 Controller IO queue size 128, less than required. 00:08:10.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:10.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:10.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:10.078 Initialization complete. Launching workers. 00:08:10.078 ======================================================== 00:08:10.078 Latency(us) 00:08:10.078 Device Information : IOPS MiB/s Average min max 00:08:10.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.23 0.09 957741.67 2140.08 1016822.22 00:08:10.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.05 0.07 901649.44 1237.24 1015816.33 00:08:10.078 ======================================================== 00:08:10.078 Total : 330.28 0.16 932428.68 1237.24 1016822.22 00:08:10.078 00:08:10.078 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:10.078 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 36525 00:08:10.078 [2024-07-26 06:00:21.240605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor 00:08:10.078 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:10.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:10.646 06:00:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 36525 00:08:10.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (36525) - No such process 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 36525 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 36525 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 36525 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.646 [2024-07-26 06:00:21.761656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=36933 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 36933 00:08:10.646 06:00:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.646 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.646 [2024-07-26 06:00:21.870210] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:11.211 06:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.211 06:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 36933 00:08:11.211 06:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.483 06:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.483 06:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 36933 00:08:11.483 06:00:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.075 06:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.075 06:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 36933 00:08:12.075 06:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.641 06:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.641 06:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
36933 00:08:12.641 06:00:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.210 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.210 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 36933 00:08:13.210 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.467 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.467 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 36933 00:08:13.467 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.727 Initializing NVMe Controllers 00:08:13.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.727 Controller IO queue size 128, less than required. 00:08:13.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:13.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:13.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:13.727 Initialization complete. Launching workers. 
00:08:13.727 ======================================================== 00:08:13.727 Latency(us) 00:08:13.727 Device Information : IOPS MiB/s Average min max 00:08:13.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005527.73 1000272.86 1042134.89 00:08:13.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005488.89 1000330.65 1013883.92 00:08:13.727 ======================================================== 00:08:13.727 Total : 256.00 0.12 1005508.31 1000272.86 1042134.89 00:08:13.727 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 36933 00:08:13.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (36933) - No such process 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 36933 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.986 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:08:13.986 rmmod nvme_tcp 00:08:14.244 rmmod nvme_fabrics 00:08:14.244 rmmod nvme_keyring 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 36373 ']' 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 36373 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 36373 ']' 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 36373 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 36373 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 36373' 00:08:14.244 killing process with pid 36373 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 36373 00:08:14.244 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 36373 00:08:15.621 
06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.621 06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.621 06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.621 06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.621 06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.621 06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.621 06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.621 06:00:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.528 00:08:17.528 real 0m13.906s 00:08:17.528 user 0m30.444s 00:08:17.528 sys 0m3.106s 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.528 ************************************ 00:08:17.528 END TEST nvmf_delete_subsystem 00:08:17.528 ************************************ 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.528 ************************************ 00:08:17.528 START TEST nvmf_host_management 00:08:17.528 ************************************ 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:17.528 * Looking for test storage... 00:08:17.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.528 06:00:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.528 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.788 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.788 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.788 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.788 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.788 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.788 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.789 06:00:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.789 06:00:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.789 06:00:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.696 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:08:19.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:19.697 00:08:19.697 --- 10.0.0.2 ping statistics --- 00:08:19.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.697 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:08:19.697 00:08:19.697 --- 10.0.0.1 ping statistics --- 00:08:19.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.697 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.697 06:00:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=39414 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 39414 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 39414 ']' 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.697 06:00:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.697 [2024-07-26 06:00:30.985101] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:19.697 [2024-07-26 06:00:30.985233] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.964 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.964 [2024-07-26 06:00:31.121949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.225 [2024-07-26 06:00:31.361145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.225 [2024-07-26 06:00:31.361216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.225 [2024-07-26 06:00:31.361244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.225 [2024-07-26 06:00:31.361266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.225 [2024-07-26 06:00:31.361288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:20.225 [2024-07-26 06:00:31.361434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.225 [2024-07-26 06:00:31.361518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.225 [2024-07-26 06:00:31.361579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.225 [2024-07-26 06:00:31.361590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:20.791 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.791 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 [2024-07-26 06:00:31.910801] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:20.792 06:00:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.792 06:00:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 Malloc0 00:08:20.792 [2024-07-26 06:00:32.023838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=39589 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 39589 /var/tmp/bdevperf.sock 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 39589 ']' 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:20.792 { 00:08:20.792 "params": { 00:08:20.792 "name": "Nvme$subsystem", 00:08:20.792 "trtype": "$TEST_TRANSPORT", 00:08:20.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.792 "adrfam": "ipv4", 00:08:20.792 "trsvcid": "$NVMF_PORT", 00:08:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.792 "hdgst": ${hdgst:-false}, 
00:08:20.792 "ddgst": ${ddgst:-false} 00:08:20.792 }, 00:08:20.792 "method": "bdev_nvme_attach_controller" 00:08:20.792 } 00:08:20.792 EOF 00:08:20.792 )") 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:20.792 06:00:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:20.792 "params": { 00:08:20.792 "name": "Nvme0", 00:08:20.792 "trtype": "tcp", 00:08:20.792 "traddr": "10.0.0.2", 00:08:20.792 "adrfam": "ipv4", 00:08:20.792 "trsvcid": "4420", 00:08:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:20.792 "hdgst": false, 00:08:20.792 "ddgst": false 00:08:20.792 }, 00:08:20.792 "method": "bdev_nvme_attach_controller" 00:08:20.792 }' 00:08:21.050 [2024-07-26 06:00:32.139846] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:21.050 [2024-07-26 06:00:32.139997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39589 ] 00:08:21.050 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.050 [2024-07-26 06:00:32.263023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.310 [2024-07-26 06:00:32.507594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.881 Running I/O for 10 seconds... 
00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.881 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.881 [2024-07-26 06:00:33.127547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.881 [2024-07-26 06:00:33.127841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127896] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.127988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 
[2024-07-26 06:00:33.128150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the 
state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 [2024-07-26 06:00:33.128396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:08:21.882 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.882 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:21.882 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.882 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.882 [2024-07-26 06:00:33.134885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.882 [2024-07-26 06:00:33.134953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.882 [2024-07-26 06:00:33.134982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.882 [2024-07-26 06:00:33.135004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.882 [2024-07-26 06:00:33.135025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.882 [2024-07-26 06:00:33.135050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.882 [2024-07-26 06:00:33.135091] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.882 [2024-07-26 06:00:33.135125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.882 [2024-07-26 06:00:33.135146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:08:21.882 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.882 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:21.882 [2024-07-26 06:00:33.148583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:21.882 [2024-07-26 06:00:33.148729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.882 [2024-07-26 06:00:33.148762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.882 [2024-07-26 06:00:33.148807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.882 [2024-07-26 06:00:33.148830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.882 [2024-07-26 06:00:33.148855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.882 [2024-07-26 06:00:33.148877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.882 [2024-07-26 06:00:33.148900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.882 [2024-07-26 06:00:33.148922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical WRITE / "ABORTED - SQ DELETION (00/08)" pairs repeat for cid:2 through cid:61, lba:24832 through lba:32384 in steps of 128, timestamps 06:00:33.148945 through 06:00:33.151714]
00:08:21.884 [2024-07-26 06:00:33.152025] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller.
00:08:21.884 [2024-07-26 06:00:33.153279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:21.884 task offset: 24320 on job bdev=Nvme0n1 fails
00:08:21.884
00:08:21.884 Latency(us)
00:08:21.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:21.884 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:21.884 Job: Nvme0n1 ended in about 0.17 seconds with error
00:08:21.884 Verification LBA range: start 0x0 length 0x400
00:08:21.884 Nvme0n1 : 0.17 1095.04 68.44 368.86 0.00 41025.75 3859.34 40777.96
00:08:21.884 ===================================================================================================================
00:08:21.884 Total : 1095.04 68.44 368.86 0.00 41025.75 3859.34 40777.96
00:08:21.884 [2024-07-26 06:00:33.158295] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:22.147 [2024-07-26 06:00:33.213287] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 39589
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:08:23.083 {
00:08:23.083 "params": {
00:08:23.083 "name": "Nvme$subsystem",
00:08:23.083 "trtype": "$TEST_TRANSPORT",
00:08:23.083 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:23.083 "adrfam": "ipv4",
00:08:23.083 "trsvcid": "$NVMF_PORT",
00:08:23.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:23.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:23.083 "hdgst": ${hdgst:-false},
00:08:23.083 "ddgst": ${ddgst:-false}
00:08:23.083 },
00:08:23.083 "method": "bdev_nvme_attach_controller"
00:08:23.083 }
00:08:23.083 EOF
00:08:23.083 )")
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:08:23.083 06:00:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:08:23.083 "params": {
00:08:23.083 "name": "Nvme0",
00:08:23.083 "trtype": "tcp",
00:08:23.083 "traddr": "10.0.0.2",
00:08:23.083 "adrfam": "ipv4",
00:08:23.083 "trsvcid": "4420",
00:08:23.083 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:23.083 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:23.083 "hdgst": false,
00:08:23.083 "ddgst": false
00:08:23.083 },
00:08:23.083 "method": "bdev_nvme_attach_controller"
00:08:23.083 }'
[2024-07-26 06:00:34.221895] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-26 06:00:34.222034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39868 ]
00:08:23.083 EAL: No free 2048 kB hugepages reported on node 1
00:08:23.083 [2024-07-26 06:00:34.343675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:23.342 [2024-07-26 06:00:34.591004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.911 Running I/O for 1 seconds...
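The bdevperf invocation above feeds its target config through process substitution (`--json /dev/fd/62`), generated on the fly by `gen_nvmf_target_json`. A standalone sketch of that pattern (not the actual `nvmf/common.sh` implementation; the transport, address, and port values are the ones the trace itself expanded):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from the trace above: emit one
# bdev_nvme_attach_controller entry per subsystem. Values mirror the expanded
# JSON printed in the log (tcp / 10.0.0.2 / 4420); the helper name is ours.
gen_target_json_sketch() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json_sketch 0
```

In the test script this output is piped through `jq .` and handed to bdevperf as a file descriptor, so no temporary config file is left on disk.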
00:08:25.288
00:08:25.288 Latency(us)
00:08:25.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.288 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:25.288 Verification LBA range: start 0x0 length 0x400
00:08:25.288 Nvme0n1 : 1.04 1292.03 80.75 0.00 0.00 48711.54 10048.85 40001.23
00:08:25.288 ===================================================================================================================
00:08:25.288 Total : 1292.03 80.75 0.00 0.00 48711.54 10048.85 40001.23
00:08:25.856 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:08:26.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 39589 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:08:26.114 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:26.114 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:26.114 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:26.114 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:08:26.115 06:00:37
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.115 rmmod nvme_tcp 00:08:26.115 rmmod nvme_fabrics 00:08:26.115 rmmod nvme_keyring 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 39414 ']' 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 39414 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 39414 ']' 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 39414 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 39414 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:26.115 06:00:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 39414' 00:08:26.115 killing process with pid 39414 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 39414 00:08:26.115 06:00:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 39414 00:08:27.522 [2024-07-26 06:00:38.607597] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:27.522 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.522 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:29.431 00:08:29.431 real 0m11.936s 00:08:29.431 user 0m32.759s 
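The `killprocess 39414` trace above walks a fixed sequence: confirm the pid is alive with `kill -0`, resolve its command name with `ps --no-headers -o comm=` (here `reactor_1`), refuse to signal a `sudo` wrapper, then kill and `wait` the process. A hedged sketch of that flow (helper name and error handling are illustrative, not the exact `autotest_common.sh` code):

```shell
#!/usr/bin/env bash
# Illustrative rendering of the killprocess flow traced above. Not the
# verbatim autotest_common.sh implementation.
killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1           # pid must still be alive
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_1" in the log
    [ "$process_name" = sudo ] && return 1           # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap it if it is our child
}
```

The `comm=` check matters because the test suites often launch targets under `sudo`; killing the wrapper instead of the worker would leave the real process orphaned.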
00:08:29.431 sys 0m3.092s 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.431 ************************************ 00:08:29.431 END TEST nvmf_host_management 00:08:29.431 ************************************ 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.431 06:00:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.697 ************************************ 00:08:29.697 START TEST nvmf_lvol 00:08:29.697 ************************************ 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.697 * Looking for test storage... 
00:08:29.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.697 
06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.697 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:29.698 
06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.698 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.600 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.601 
06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.601 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.601 06:00:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:08:31.601 00:08:31.601 --- 10.0.0.2 ping statistics --- 00:08:31.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.601 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:08:31.601 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:08:31.861 00:08:31.861 --- 10.0.0.1 ping statistics --- 00:08:31.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.861 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.861 06:00:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=42329 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 42329 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 42329 ']' 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.861 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.861 [2024-07-26 06:00:43.054121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:31.861 [2024-07-26 06:00:43.054271] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.861 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.861 [2024-07-26 06:00:43.188163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.121 [2024-07-26 06:00:43.449004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.121 [2024-07-26 06:00:43.449098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.121 [2024-07-26 06:00:43.449138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.121 [2024-07-26 06:00:43.449161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.121 [2024-07-26 06:00:43.449183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.121 [2024-07-26 06:00:43.449322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.121 [2024-07-26 06:00:43.449379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.121 [2024-07-26 06:00:43.449389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.687 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.687 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:32.687 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.687 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.687 06:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.687 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.687 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.945 [2024-07-26 06:00:44.226855] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.945 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.513 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:33.513 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.771 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:33.771 06:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:34.029 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:34.288 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=72eb9360-4d5f-464f-a88e-2454fbc00330 00:08:34.288 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 72eb9360-4d5f-464f-a88e-2454fbc00330 lvol 20 00:08:34.546 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e6dca600-f09a-4fee-a9e9-d66688176528 00:08:34.546 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.808 06:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6dca600-f09a-4fee-a9e9-d66688176528 00:08:35.068 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:35.326 [2024-07-26 06:00:46.452932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.326 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.583 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=42769 00:08:35.583 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:35.583 06:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:35.583 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.517 06:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e6dca600-f09a-4fee-a9e9-d66688176528 MY_SNAPSHOT 00:08:36.775 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3cc64774-288b-420c-8685-7c224a55575a 00:08:36.775 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e6dca600-f09a-4fee-a9e9-d66688176528 30 00:08:37.344 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3cc64774-288b-420c-8685-7c224a55575a MY_CLONE 00:08:37.602 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=147ca043-1e34-450f-b18d-6eeba3b3f1bd 00:08:37.602 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 147ca043-1e34-450f-b18d-6eeba3b3f1bd 00:08:38.170 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 42769 00:08:46.288 Initializing NVMe Controllers 00:08:46.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:46.288 Controller IO queue size 128, less than required. 00:08:46.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:46.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:46.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:46.288 Initialization complete. Launching workers. 00:08:46.288 ======================================================== 00:08:46.288 Latency(us) 00:08:46.288 Device Information : IOPS MiB/s Average min max 00:08:46.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7934.93 31.00 16151.40 562.80 168165.22 00:08:46.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8159.31 31.87 15706.35 3674.24 199814.96 00:08:46.288 ======================================================== 00:08:46.288 Total : 16094.24 62.87 15925.77 562.80 199814.96 00:08:46.288 00:08:46.288 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.288 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6dca600-f09a-4fee-a9e9-d66688176528 00:08:46.549 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 72eb9360-4d5f-464f-a88e-2454fbc00330 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.831 rmmod nvme_tcp 00:08:46.831 rmmod nvme_fabrics 00:08:46.831 rmmod nvme_keyring 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 42329 ']' 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 42329 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 42329 ']' 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 42329 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.831 06:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 42329 00:08:46.832 06:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.832 06:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.832 06:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 42329' 00:08:46.832 killing process with pid 42329 00:08:46.832 06:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@969 -- # kill 42329 00:08:46.832 06:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 42329 00:08:48.215 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.474 06:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.382 00:08:50.382 real 0m20.832s 00:08:50.382 user 1m8.349s 00:08:50.382 sys 0m5.963s 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.382 ************************************ 00:08:50.382 END TEST nvmf_lvol 00:08:50.382 ************************************ 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:50.382 06:01:01 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.382 ************************************ 00:08:50.382 START TEST nvmf_lvs_grow 00:08:50.382 ************************************ 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:50.382 * Looking for test storage... 00:08:50.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.382 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.642 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:50.643 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.550 06:01:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.550 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.550 
06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.550 06:01:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.550 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.551 06:01:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:08:52.551 00:08:52.551 --- 10.0.0.2 ping statistics --- 00:08:52.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.551 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:52.551 00:08:52.551 --- 10.0.0.1 ping statistics --- 00:08:52.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.551 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=46167 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 46167 00:08:52.551 06:01:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 46167 ']' 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.551 06:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.551 [2024-07-26 06:01:03.816724] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:52.551 [2024-07-26 06:01:03.816878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.809 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.809 [2024-07-26 06:01:03.956896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.069 [2024-07-26 06:01:04.214893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.069 [2024-07-26 06:01:04.214984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:53.069 [2024-07-26 06:01:04.215012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.069 [2024-07-26 06:01:04.215040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.069 [2024-07-26 06:01:04.215070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.069 [2024-07-26 06:01:04.215143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.637 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.637 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:53.637 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.637 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.637 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.637 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.637 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:53.895 [2024-07-26 06:01:05.029679] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.895 ************************************ 00:08:53.895 START TEST lvs_grow_clean 00:08:53.895 ************************************ 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.895 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.153 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:54.153 06:01:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:54.410 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3d14c47e-9450-4285-8d81-1098778f04c2 00:08:54.411 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:08:54.411 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:54.669 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:54.669 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:54.669 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3d14c47e-9450-4285-8d81-1098778f04c2 lvol 150 00:08:54.928 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d4cccad-fe03-40db-8b25-5f0de9f8fa09 00:08:54.928 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.928 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:55.186 [2024-07-26 06:01:06.323680] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:55.186 [2024-07-26 06:01:06.323836] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:55.186 true 00:08:55.186 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:08:55.186 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:55.445 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:55.445 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.704 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d4cccad-fe03-40db-8b25-5f0de9f8fa09 00:08:55.963 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:56.223 [2024-07-26 06:01:07.367086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.223 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.481 06:01:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=46737 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 46737 /var/tmp/bdevperf.sock 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 46737 ']' 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.481 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:56.481 [2024-07-26 06:01:07.704281] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:56.482 [2024-07-26 06:01:07.704453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46737 ] 00:08:56.482 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.741 [2024-07-26 06:01:07.834752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.001 [2024-07-26 06:01:08.089915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.568 06:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.568 06:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:57.569 06:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:57.827 Nvme0n1 00:08:57.827 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:58.085 [ 00:08:58.085 { 00:08:58.085 "name": "Nvme0n1", 00:08:58.085 "aliases": [ 00:08:58.085 "0d4cccad-fe03-40db-8b25-5f0de9f8fa09" 00:08:58.085 ], 00:08:58.085 "product_name": "NVMe disk", 00:08:58.085 "block_size": 4096, 00:08:58.085 "num_blocks": 38912, 00:08:58.085 "uuid": "0d4cccad-fe03-40db-8b25-5f0de9f8fa09", 00:08:58.085 "assigned_rate_limits": { 00:08:58.085 "rw_ios_per_sec": 0, 00:08:58.085 "rw_mbytes_per_sec": 0, 00:08:58.085 "r_mbytes_per_sec": 0, 00:08:58.085 "w_mbytes_per_sec": 0 00:08:58.085 }, 00:08:58.085 "claimed": false, 00:08:58.085 "zoned": false, 00:08:58.085 
"supported_io_types": { 00:08:58.085 "read": true, 00:08:58.085 "write": true, 00:08:58.085 "unmap": true, 00:08:58.085 "flush": true, 00:08:58.085 "reset": true, 00:08:58.085 "nvme_admin": true, 00:08:58.085 "nvme_io": true, 00:08:58.085 "nvme_io_md": false, 00:08:58.085 "write_zeroes": true, 00:08:58.085 "zcopy": false, 00:08:58.085 "get_zone_info": false, 00:08:58.085 "zone_management": false, 00:08:58.085 "zone_append": false, 00:08:58.085 "compare": true, 00:08:58.085 "compare_and_write": true, 00:08:58.085 "abort": true, 00:08:58.085 "seek_hole": false, 00:08:58.085 "seek_data": false, 00:08:58.085 "copy": true, 00:08:58.085 "nvme_iov_md": false 00:08:58.085 }, 00:08:58.085 "memory_domains": [ 00:08:58.085 { 00:08:58.085 "dma_device_id": "system", 00:08:58.085 "dma_device_type": 1 00:08:58.085 } 00:08:58.085 ], 00:08:58.085 "driver_specific": { 00:08:58.085 "nvme": [ 00:08:58.085 { 00:08:58.085 "trid": { 00:08:58.085 "trtype": "TCP", 00:08:58.085 "adrfam": "IPv4", 00:08:58.085 "traddr": "10.0.0.2", 00:08:58.085 "trsvcid": "4420", 00:08:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:58.085 }, 00:08:58.085 "ctrlr_data": { 00:08:58.085 "cntlid": 1, 00:08:58.085 "vendor_id": "0x8086", 00:08:58.085 "model_number": "SPDK bdev Controller", 00:08:58.085 "serial_number": "SPDK0", 00:08:58.085 "firmware_revision": "24.09", 00:08:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:58.085 "oacs": { 00:08:58.085 "security": 0, 00:08:58.085 "format": 0, 00:08:58.085 "firmware": 0, 00:08:58.085 "ns_manage": 0 00:08:58.085 }, 00:08:58.085 "multi_ctrlr": true, 00:08:58.085 "ana_reporting": false 00:08:58.085 }, 00:08:58.085 "vs": { 00:08:58.085 "nvme_version": "1.3" 00:08:58.085 }, 00:08:58.085 "ns_data": { 00:08:58.085 "id": 1, 00:08:58.085 "can_share": true 00:08:58.085 } 00:08:58.085 } 00:08:58.085 ], 00:08:58.085 "mp_policy": "active_passive" 00:08:58.085 } 00:08:58.085 } 00:08:58.085 ] 00:08:58.085 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=46886 00:08:58.085 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:58.085 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:58.344 Running I/O for 10 seconds... 00:08:59.282 Latency(us) 00:08:59.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.282 Nvme0n1 : 1.00 10542.00 41.18 0.00 0.00 0.00 0.00 0.00 00:08:59.282 =================================================================================================================== 00:08:59.282 Total : 10542.00 41.18 0.00 0.00 0.00 0.00 0.00 00:08:59.282 00:09:00.220 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:00.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.220 Nvme0n1 : 2.00 10668.50 41.67 0.00 0.00 0.00 0.00 0.00 00:09:00.220 =================================================================================================================== 00:09:00.220 Total : 10668.50 41.67 0.00 0.00 0.00 0.00 0.00 00:09:00.220 00:09:00.478 true 00:09:00.478 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:00.478 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:00.737 06:01:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:00.737 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:00.737 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 46886 00:09:01.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.312 Nvme0n1 : 3.00 10753.00 42.00 0.00 0.00 0.00 0.00 0.00 00:09:01.312 =================================================================================================================== 00:09:01.312 Total : 10753.00 42.00 0.00 0.00 0.00 0.00 0.00 00:09:01.312 00:09:02.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.285 Nvme0n1 : 4.00 10827.00 42.29 0.00 0.00 0.00 0.00 0.00 00:09:02.285 =================================================================================================================== 00:09:02.285 Total : 10827.00 42.29 0.00 0.00 0.00 0.00 0.00 00:09:02.285 00:09:03.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.222 Nvme0n1 : 5.00 10846.00 42.37 0.00 0.00 0.00 0.00 0.00 00:09:03.222 =================================================================================================================== 00:09:03.222 Total : 10846.00 42.37 0.00 0.00 0.00 0.00 0.00 00:09:03.222 00:09:04.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.604 Nvme0n1 : 6.00 10858.67 42.42 0.00 0.00 0.00 0.00 0.00 00:09:04.604 =================================================================================================================== 00:09:04.604 Total : 10858.67 42.42 0.00 0.00 0.00 0.00 0.00 00:09:04.604 00:09:05.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.543 Nvme0n1 : 7.00 10867.71 42.45 0.00 0.00 0.00 0.00 0.00 00:09:05.543 
=================================================================================================================== 00:09:05.543 Total : 10867.71 42.45 0.00 0.00 0.00 0.00 0.00 00:09:05.543 00:09:06.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.481 Nvme0n1 : 8.00 10906.25 42.60 0.00 0.00 0.00 0.00 0.00 00:09:06.481 =================================================================================================================== 00:09:06.481 Total : 10906.25 42.60 0.00 0.00 0.00 0.00 0.00 00:09:06.481 00:09:07.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.418 Nvme0n1 : 9.00 10936.22 42.72 0.00 0.00 0.00 0.00 0.00 00:09:07.418 =================================================================================================================== 00:09:07.418 Total : 10936.22 42.72 0.00 0.00 0.00 0.00 0.00 00:09:07.418 00:09:08.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.356 Nvme0n1 : 10.00 10949.20 42.77 0.00 0.00 0.00 0.00 0.00 00:09:08.356 =================================================================================================================== 00:09:08.356 Total : 10949.20 42.77 0.00 0.00 0.00 0.00 0.00 00:09:08.356 00:09:08.356 00:09:08.356 Latency(us) 00:09:08.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.356 Nvme0n1 : 10.00 10956.57 42.80 0.00 0.00 11674.89 7864.32 22816.24 00:09:08.356 =================================================================================================================== 00:09:08.356 Total : 10956.57 42.80 0.00 0.00 11674.89 7864.32 22816.24 00:09:08.356 0 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 46737 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 46737 ']' 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 46737 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 46737 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 46737' 00:09:08.356 killing process with pid 46737 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 46737 00:09:08.356 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.356 00:09:08.356 Latency(us) 00:09:08.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.356 =================================================================================================================== 00:09:08.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:08.356 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 46737 00:09:09.293 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:09.551 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.812 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:10.071 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:10.071 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:10.331 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:10.331 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:10.331 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.331 [2024-07-26 06:01:21.643101] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:10.592 06:01:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:10.592 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:10.592 request: 00:09:10.592 { 00:09:10.592 "uuid": "3d14c47e-9450-4285-8d81-1098778f04c2", 00:09:10.592 "method": "bdev_lvol_get_lvstores", 00:09:10.592 "req_id": 1 00:09:10.592 } 00:09:10.592 Got JSON-RPC error response 00:09:10.592 response: 00:09:10.592 { 00:09:10.592 "code": -19, 00:09:10.592 "message": "No such device" 00:09:10.592 } 00:09:10.851 06:01:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:10.851 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.851 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.851 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.851 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.851 aio_bdev 00:09:11.109 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d4cccad-fe03-40db-8b25-5f0de9f8fa09 00:09:11.109 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0d4cccad-fe03-40db-8b25-5f0de9f8fa09 00:09:11.109 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.109 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:11.109 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.109 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.109 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:11.369 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d4cccad-fe03-40db-8b25-5f0de9f8fa09 -t 2000 00:09:11.629 [ 00:09:11.629 { 00:09:11.629 "name": "0d4cccad-fe03-40db-8b25-5f0de9f8fa09", 00:09:11.629 "aliases": [ 00:09:11.629 "lvs/lvol" 00:09:11.629 ], 00:09:11.629 "product_name": "Logical Volume", 00:09:11.629 "block_size": 4096, 00:09:11.629 "num_blocks": 38912, 00:09:11.629 "uuid": "0d4cccad-fe03-40db-8b25-5f0de9f8fa09", 00:09:11.629 "assigned_rate_limits": { 00:09:11.629 "rw_ios_per_sec": 0, 00:09:11.629 "rw_mbytes_per_sec": 0, 00:09:11.629 "r_mbytes_per_sec": 0, 00:09:11.629 "w_mbytes_per_sec": 0 00:09:11.629 }, 00:09:11.629 "claimed": false, 00:09:11.629 "zoned": false, 00:09:11.629 "supported_io_types": { 00:09:11.629 "read": true, 00:09:11.629 "write": true, 00:09:11.629 "unmap": true, 00:09:11.629 "flush": false, 00:09:11.629 "reset": true, 00:09:11.629 "nvme_admin": false, 00:09:11.629 "nvme_io": false, 00:09:11.629 "nvme_io_md": false, 00:09:11.629 "write_zeroes": true, 00:09:11.629 "zcopy": false, 00:09:11.629 "get_zone_info": false, 00:09:11.629 "zone_management": false, 00:09:11.629 "zone_append": false, 00:09:11.629 "compare": false, 00:09:11.629 "compare_and_write": false, 00:09:11.629 "abort": false, 00:09:11.629 "seek_hole": true, 00:09:11.629 "seek_data": true, 00:09:11.629 "copy": false, 00:09:11.629 "nvme_iov_md": false 00:09:11.629 }, 00:09:11.629 "driver_specific": { 00:09:11.629 "lvol": { 00:09:11.629 "lvol_store_uuid": "3d14c47e-9450-4285-8d81-1098778f04c2", 00:09:11.629 "base_bdev": "aio_bdev", 00:09:11.629 "thin_provision": false, 00:09:11.629 "num_allocated_clusters": 38, 00:09:11.629 "snapshot": false, 00:09:11.629 "clone": false, 00:09:11.629 "esnap_clone": false 00:09:11.629 } 00:09:11.629 } 00:09:11.629 } 00:09:11.629 ] 00:09:11.629 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:11.629 06:01:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:11.629 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:11.889 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:11.889 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:11.889 06:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:11.889 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:11.890 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d4cccad-fe03-40db-8b25-5f0de9f8fa09 00:09:12.162 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d14c47e-9450-4285-8d81-1098778f04c2 00:09:12.427 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:12.685 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.685 00:09:12.685 real 0m18.904s 00:09:12.685 user 0m18.532s 00:09:12.685 sys 0m1.955s 00:09:12.685 
06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.685 06:01:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:12.685 ************************************ 00:09:12.685 END TEST lvs_grow_clean 00:09:12.685 ************************************ 00:09:12.685 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:12.685 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.685 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.685 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.942 ************************************ 00:09:12.942 START TEST lvs_grow_dirty 00:09:12.942 ************************************ 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:12.942 06:01:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.942 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.199 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:13.199 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:13.456 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:13.456 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:13.456 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:13.713 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:13.713 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:13.713 06:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 lvol 150 00:09:13.972 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f4ab4108-1525-447c-9314-71237bd2da85 00:09:13.972 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.972 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:14.230 [2024-07-26 06:01:25.334721] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:14.230 [2024-07-26 06:01:25.334862] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:14.230 true 00:09:14.230 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:14.230 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:14.488 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:14.488 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:14.746 06:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f4ab4108-1525-447c-9314-71237bd2da85 00:09:15.006 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:15.006 [2024-07-26 06:01:26.326085] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=48939 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 48939 /var/tmp/bdevperf.sock 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 48939 ']' 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.264 06:01:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.264 06:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:15.526 [2024-07-26 06:01:26.670973] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:15.526 [2024-07-26 06:01:26.671146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48939 ] 00:09:15.526 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.526 [2024-07-26 06:01:26.809526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.804 [2024-07-26 06:01:27.067864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.382 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.382 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:16.382 06:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:16.950 Nvme0n1 00:09:16.950 06:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:16.950 [ 00:09:16.950 { 00:09:16.950 "name": "Nvme0n1", 00:09:16.950 "aliases": [ 00:09:16.950 "f4ab4108-1525-447c-9314-71237bd2da85" 00:09:16.950 ], 00:09:16.950 "product_name": "NVMe disk", 00:09:16.950 "block_size": 4096, 00:09:16.950 "num_blocks": 38912, 00:09:16.950 "uuid": "f4ab4108-1525-447c-9314-71237bd2da85", 00:09:16.950 "assigned_rate_limits": { 00:09:16.950 "rw_ios_per_sec": 0, 00:09:16.950 "rw_mbytes_per_sec": 0, 00:09:16.950 "r_mbytes_per_sec": 0, 00:09:16.950 "w_mbytes_per_sec": 0 00:09:16.950 }, 00:09:16.950 "claimed": false, 00:09:16.950 "zoned": false, 00:09:16.950 "supported_io_types": { 00:09:16.950 "read": true, 00:09:16.950 "write": true, 00:09:16.950 "unmap": true, 00:09:16.950 "flush": true, 00:09:16.950 "reset": true, 00:09:16.950 "nvme_admin": true, 00:09:16.950 "nvme_io": true, 00:09:16.950 "nvme_io_md": false, 00:09:16.950 "write_zeroes": true, 00:09:16.950 "zcopy": false, 00:09:16.950 "get_zone_info": false, 00:09:16.950 "zone_management": false, 00:09:16.950 "zone_append": false, 00:09:16.950 "compare": true, 00:09:16.950 "compare_and_write": true, 00:09:16.950 "abort": true, 00:09:16.950 "seek_hole": false, 00:09:16.950 "seek_data": false, 00:09:16.950 "copy": true, 00:09:16.950 "nvme_iov_md": false 00:09:16.950 }, 00:09:16.950 "memory_domains": [ 00:09:16.950 { 00:09:16.950 "dma_device_id": "system", 00:09:16.950 "dma_device_type": 1 00:09:16.950 } 00:09:16.950 ], 00:09:16.950 "driver_specific": { 00:09:16.950 "nvme": [ 00:09:16.950 { 00:09:16.950 "trid": { 00:09:16.950 "trtype": "TCP", 00:09:16.950 "adrfam": "IPv4", 00:09:16.950 "traddr": "10.0.0.2", 00:09:16.950 "trsvcid": "4420", 00:09:16.950 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:16.950 }, 00:09:16.950 "ctrlr_data": { 00:09:16.950 "cntlid": 1, 00:09:16.950 "vendor_id": "0x8086", 00:09:16.950 "model_number": "SPDK bdev Controller", 00:09:16.950 "serial_number": "SPDK0", 00:09:16.950 "firmware_revision": "24.09", 
00:09:16.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:16.950 "oacs": { 00:09:16.950 "security": 0, 00:09:16.950 "format": 0, 00:09:16.950 "firmware": 0, 00:09:16.950 "ns_manage": 0 00:09:16.950 }, 00:09:16.950 "multi_ctrlr": true, 00:09:16.950 "ana_reporting": false 00:09:16.950 }, 00:09:16.950 "vs": { 00:09:16.950 "nvme_version": "1.3" 00:09:16.950 }, 00:09:16.950 "ns_data": { 00:09:16.950 "id": 1, 00:09:16.950 "can_share": true 00:09:16.950 } 00:09:16.950 } 00:09:16.950 ], 00:09:16.950 "mp_policy": "active_passive" 00:09:16.950 } 00:09:16.950 } 00:09:16.950 ] 00:09:16.950 06:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=49201 00:09:16.950 06:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:16.950 06:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.208 Running I/O for 10 seconds... 
00:09:18.146 Latency(us) 00:09:18.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.146 Nvme0n1 : 1.00 11050.00 43.16 0.00 0.00 0.00 0.00 0.00 00:09:18.146 =================================================================================================================== 00:09:18.146 Total : 11050.00 43.16 0.00 0.00 0.00 0.00 0.00 00:09:18.146 00:09:19.102 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:19.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.102 Nvme0n1 : 2.00 11176.50 43.66 0.00 0.00 0.00 0.00 0.00 00:09:19.102 =================================================================================================================== 00:09:19.102 Total : 11176.50 43.66 0.00 0.00 0.00 0.00 0.00 00:09:19.102 00:09:19.359 true 00:09:19.359 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:19.359 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:19.618 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:19.618 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:19.618 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 49201 00:09:20.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.185 Nvme0n1 : 3.00 11240.00 43.91 
0.00 0.00 0.00 0.00 0.00 00:09:20.185 =================================================================================================================== 00:09:20.185 Total : 11240.00 43.91 0.00 0.00 0.00 0.00 0.00 00:09:20.185 00:09:21.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.122 Nvme0n1 : 4.00 11271.50 44.03 0.00 0.00 0.00 0.00 0.00 00:09:21.122 =================================================================================================================== 00:09:21.123 Total : 11271.50 44.03 0.00 0.00 0.00 0.00 0.00 00:09:21.123 00:09:22.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.498 Nvme0n1 : 5.00 11277.80 44.05 0.00 0.00 0.00 0.00 0.00 00:09:22.498 =================================================================================================================== 00:09:22.498 Total : 11277.80 44.05 0.00 0.00 0.00 0.00 0.00 00:09:22.498 00:09:23.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.437 Nvme0n1 : 6.00 11282.00 44.07 0.00 0.00 0.00 0.00 0.00 00:09:23.437 =================================================================================================================== 00:09:23.437 Total : 11282.00 44.07 0.00 0.00 0.00 0.00 0.00 00:09:23.437 00:09:24.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.375 Nvme0n1 : 7.00 11303.14 44.15 0.00 0.00 0.00 0.00 0.00 00:09:24.375 =================================================================================================================== 00:09:24.375 Total : 11303.14 44.15 0.00 0.00 0.00 0.00 0.00 00:09:24.375 00:09:25.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.314 Nvme0n1 : 8.00 11311.12 44.18 0.00 0.00 0.00 0.00 0.00 00:09:25.314 =================================================================================================================== 00:09:25.314 Total : 11311.12 44.18 0.00 
0.00 0.00 0.00 0.00 00:09:25.314 00:09:26.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.251 Nvme0n1 : 9.00 11310.22 44.18 0.00 0.00 0.00 0.00 0.00 00:09:26.251 =================================================================================================================== 00:09:26.251 Total : 11310.22 44.18 0.00 0.00 0.00 0.00 0.00 00:09:26.251 00:09:27.189 00:09:27.189 Latency(us) 00:09:27.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.189 Nvme0n1 : 10.00 11313.06 44.19 0.00 0.00 11306.61 8592.50 22039.51 00:09:27.189 =================================================================================================================== 00:09:27.189 Total : 11313.06 44.19 0.00 0.00 11306.61 8592.50 22039.51 00:09:27.189 0 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 48939 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 48939 ']' 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 48939 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 48939 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:27.189 06:01:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 48939' 00:09:27.189 killing process with pid 48939 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 48939 00:09:27.189 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.189 00:09:27.189 Latency(us) 00:09:27.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.189 =================================================================================================================== 00:09:27.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.189 06:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 48939 00:09:28.129 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:28.388 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.645 06:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.903 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:28.903 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:29.161 06:01:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 46167 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 46167 00:09:29.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 46167 Killed "${NVMF_APP[@]}" "$@" 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=50610 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 50610 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 50610 ']' 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- 
# local max_retries=100 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.161 06:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.161 [2024-07-26 06:01:40.463740] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:29.161 [2024-07-26 06:01:40.463893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.419 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.419 [2024-07-26 06:01:40.600964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.679 [2024-07-26 06:01:40.855747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.679 [2024-07-26 06:01:40.855838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.679 [2024-07-26 06:01:40.855867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.679 [2024-07-26 06:01:40.855908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.679 [2024-07-26 06:01:40.855927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:29.679 [2024-07-26 06:01:40.855985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.248 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.248 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:30.249 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.249 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.249 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.249 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.249 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.514 [2024-07-26 06:01:41.628212] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:30.514 [2024-07-26 06:01:41.628480] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:30.514 [2024-07-26 06:01:41.628551] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f4ab4108-1525-447c-9314-71237bd2da85 00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f4ab4108-1525-447c-9314-71237bd2da85 
00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.514 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:30.807 06:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4ab4108-1525-447c-9314-71237bd2da85 -t 2000 00:09:31.067 [ 00:09:31.067 { 00:09:31.067 "name": "f4ab4108-1525-447c-9314-71237bd2da85", 00:09:31.067 "aliases": [ 00:09:31.067 "lvs/lvol" 00:09:31.067 ], 00:09:31.067 "product_name": "Logical Volume", 00:09:31.067 "block_size": 4096, 00:09:31.067 "num_blocks": 38912, 00:09:31.067 "uuid": "f4ab4108-1525-447c-9314-71237bd2da85", 00:09:31.067 "assigned_rate_limits": { 00:09:31.067 "rw_ios_per_sec": 0, 00:09:31.067 "rw_mbytes_per_sec": 0, 00:09:31.067 "r_mbytes_per_sec": 0, 00:09:31.067 "w_mbytes_per_sec": 0 00:09:31.067 }, 00:09:31.067 "claimed": false, 00:09:31.067 "zoned": false, 00:09:31.067 "supported_io_types": { 00:09:31.067 "read": true, 00:09:31.067 "write": true, 00:09:31.067 "unmap": true, 00:09:31.068 "flush": false, 00:09:31.068 "reset": true, 00:09:31.068 "nvme_admin": false, 00:09:31.068 "nvme_io": false, 00:09:31.068 "nvme_io_md": false, 00:09:31.068 "write_zeroes": true, 00:09:31.068 "zcopy": false, 00:09:31.068 "get_zone_info": false, 00:09:31.068 "zone_management": false, 00:09:31.068 "zone_append": 
false, 00:09:31.068 "compare": false, 00:09:31.068 "compare_and_write": false, 00:09:31.068 "abort": false, 00:09:31.068 "seek_hole": true, 00:09:31.068 "seek_data": true, 00:09:31.068 "copy": false, 00:09:31.068 "nvme_iov_md": false 00:09:31.068 }, 00:09:31.068 "driver_specific": { 00:09:31.068 "lvol": { 00:09:31.068 "lvol_store_uuid": "db525151-0d7c-4c0e-8dae-9a1b0dbdad83", 00:09:31.068 "base_bdev": "aio_bdev", 00:09:31.068 "thin_provision": false, 00:09:31.068 "num_allocated_clusters": 38, 00:09:31.068 "snapshot": false, 00:09:31.068 "clone": false, 00:09:31.068 "esnap_clone": false 00:09:31.068 } 00:09:31.068 } 00:09:31.068 } 00:09:31.068 ] 00:09:31.068 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:31.068 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:31.068 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:31.326 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:31.326 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:31.326 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:31.326 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:31.326 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:31.586 [2024-07-26 06:01:42.888596] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.586 06:01:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:31.586 06:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:31.845 request: 00:09:31.845 { 00:09:31.845 "uuid": "db525151-0d7c-4c0e-8dae-9a1b0dbdad83", 00:09:31.845 "method": "bdev_lvol_get_lvstores", 00:09:31.845 "req_id": 1 00:09:31.846 } 00:09:31.846 Got JSON-RPC error response 00:09:31.846 response: 00:09:31.846 { 00:09:31.846 "code": -19, 00:09:31.846 "message": "No such device" 00:09:31.846 } 00:09:32.104 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:32.104 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:32.104 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:32.104 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:32.104 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.362 aio_bdev 00:09:32.362 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f4ab4108-1525-447c-9314-71237bd2da85 00:09:32.362 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f4ab4108-1525-447c-9314-71237bd2da85 00:09:32.362 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.362 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:32.362 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.362 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.362 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:32.621 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4ab4108-1525-447c-9314-71237bd2da85 -t 2000 00:09:32.880 [ 00:09:32.880 { 00:09:32.880 "name": "f4ab4108-1525-447c-9314-71237bd2da85", 00:09:32.880 "aliases": [ 00:09:32.880 "lvs/lvol" 00:09:32.880 ], 00:09:32.880 "product_name": "Logical Volume", 00:09:32.880 "block_size": 4096, 00:09:32.880 "num_blocks": 38912, 00:09:32.880 "uuid": "f4ab4108-1525-447c-9314-71237bd2da85", 00:09:32.880 "assigned_rate_limits": { 00:09:32.880 "rw_ios_per_sec": 0, 00:09:32.880 "rw_mbytes_per_sec": 0, 00:09:32.880 "r_mbytes_per_sec": 0, 00:09:32.880 "w_mbytes_per_sec": 0 00:09:32.880 }, 00:09:32.880 "claimed": false, 00:09:32.880 "zoned": false, 00:09:32.880 "supported_io_types": { 00:09:32.880 "read": true, 00:09:32.880 "write": true, 00:09:32.880 "unmap": true, 00:09:32.880 "flush": false, 00:09:32.880 "reset": true, 00:09:32.880 "nvme_admin": false, 00:09:32.880 "nvme_io": false, 00:09:32.880 "nvme_io_md": false, 00:09:32.880 "write_zeroes": true, 00:09:32.880 "zcopy": false, 00:09:32.880 "get_zone_info": false, 00:09:32.880 "zone_management": false, 00:09:32.880 "zone_append": false, 00:09:32.880 "compare": false, 00:09:32.880 "compare_and_write": false, 
00:09:32.880 "abort": false, 00:09:32.880 "seek_hole": true, 00:09:32.880 "seek_data": true, 00:09:32.880 "copy": false, 00:09:32.880 "nvme_iov_md": false 00:09:32.880 }, 00:09:32.880 "driver_specific": { 00:09:32.880 "lvol": { 00:09:32.880 "lvol_store_uuid": "db525151-0d7c-4c0e-8dae-9a1b0dbdad83", 00:09:32.880 "base_bdev": "aio_bdev", 00:09:32.880 "thin_provision": false, 00:09:32.880 "num_allocated_clusters": 38, 00:09:32.880 "snapshot": false, 00:09:32.880 "clone": false, 00:09:32.880 "esnap_clone": false 00:09:32.880 } 00:09:32.880 } 00:09:32.880 } 00:09:32.880 ] 00:09:32.880 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:32.880 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:32.880 06:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:33.138 06:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:33.138 06:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:33.138 06:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:33.396 06:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:33.396 06:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f4ab4108-1525-447c-9314-71237bd2da85 00:09:33.396 06:01:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db525151-0d7c-4c0e-8dae-9a1b0dbdad83 00:09:33.964 06:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.964 00:09:33.964 real 0m21.232s 00:09:33.964 user 0m53.810s 00:09:33.964 sys 0m4.675s 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 ************************************ 00:09:33.964 END TEST lvs_grow_dirty 00:09:33.964 ************************************ 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:33.964 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:34.224 nvmf_trace.0 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.224 rmmod nvme_tcp 00:09:34.224 rmmod nvme_fabrics 00:09:34.224 rmmod nvme_keyring 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 50610 ']' 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 50610 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 50610 ']' 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 50610 00:09:34.224 
06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 50610 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 50610' 00:09:34.224 killing process with pid 50610 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 50610 00:09:34.224 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 50610 00:09:35.602 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:35.602 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.602 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.602 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.602 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.602 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.602 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.602 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.602 06:01:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:37.507 00:09:37.507 real 0m47.090s 00:09:37.507 user 1m19.750s 00:09:37.507 sys 0m8.619s 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:37.507 ************************************ 00:09:37.507 END TEST nvmf_lvs_grow 00:09:37.507 ************************************ 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.507 ************************************ 00:09:37.507 START TEST nvmf_bdev_io_wait 00:09:37.507 ************************************ 00:09:37.507 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:37.766 * Looking for test storage... 
00:09:37.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:37.766 06:01:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.766 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:37.767 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.685 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.686 06:01:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.686 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.686 06:01:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.686 06:01:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:09:39.686 00:09:39.686 --- 10.0.0.2 ping statistics --- 00:09:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.686 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:39.686 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:09:39.686 00:09:39.686 --- 10.0.0.1 ping statistics --- 00:09:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.686 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.686 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=53337 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 53337 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 53337 ']' 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.945 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.945 [2024-07-26 06:01:51.114998] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:39.945 [2024-07-26 06:01:51.115186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.945 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.945 [2024-07-26 06:01:51.258176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.205 [2024-07-26 06:01:51.520355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:40.205 [2024-07-26 06:01:51.520442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.205 [2024-07-26 06:01:51.520470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.205 [2024-07-26 06:01:51.520492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.205 [2024-07-26 06:01:51.520514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.205 [2024-07-26 06:01:51.520641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.205 [2024-07-26 06:01:51.520713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.205 [2024-07-26 06:01:51.520997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.205 [2024-07-26 06:01:51.521031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.771 
06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.771 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.029 [2024-07-26 06:01:52.333013] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.029 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.288 Malloc0 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.288 
06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.288 [2024-07-26 06:01:52.446026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=53502 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=53503 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=53506 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:41.288 { 00:09:41.288 "params": { 00:09:41.288 "name": "Nvme$subsystem", 00:09:41.288 "trtype": "$TEST_TRANSPORT", 00:09:41.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.288 "adrfam": "ipv4", 00:09:41.288 "trsvcid": "$NVMF_PORT", 00:09:41.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.288 "hdgst": ${hdgst:-false}, 00:09:41.288 "ddgst": ${ddgst:-false} 00:09:41.288 }, 00:09:41.288 "method": "bdev_nvme_attach_controller" 00:09:41.288 } 00:09:41.288 EOF 00:09:41.288 )") 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:41.288 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:41.288 06:01:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:41.288 { 00:09:41.288 "params": { 00:09:41.288 "name": "Nvme$subsystem", 00:09:41.288 "trtype": "$TEST_TRANSPORT", 00:09:41.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.289 "adrfam": "ipv4", 00:09:41.289 "trsvcid": "$NVMF_PORT", 00:09:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.289 "hdgst": ${hdgst:-false}, 00:09:41.289 "ddgst": ${ddgst:-false} 00:09:41.289 }, 00:09:41.289 "method": "bdev_nvme_attach_controller" 00:09:41.289 } 00:09:41.289 EOF 00:09:41.289 )") 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=53508 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:41.289 { 00:09:41.289 "params": { 00:09:41.289 "name": "Nvme$subsystem", 00:09:41.289 "trtype": "$TEST_TRANSPORT", 00:09:41.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.289 "adrfam": "ipv4", 00:09:41.289 "trsvcid": "$NVMF_PORT", 00:09:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:09:41.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.289 "hdgst": ${hdgst:-false}, 00:09:41.289 "ddgst": ${ddgst:-false} 00:09:41.289 }, 00:09:41.289 "method": "bdev_nvme_attach_controller" 00:09:41.289 } 00:09:41.289 EOF 00:09:41.289 )") 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:41.289 { 00:09:41.289 "params": { 00:09:41.289 "name": "Nvme$subsystem", 00:09:41.289 "trtype": "$TEST_TRANSPORT", 00:09:41.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.289 "adrfam": "ipv4", 00:09:41.289 "trsvcid": "$NVMF_PORT", 00:09:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.289 "hdgst": ${hdgst:-false}, 00:09:41.289 "ddgst": ${ddgst:-false} 00:09:41.289 }, 00:09:41.289 "method": "bdev_nvme_attach_controller" 00:09:41.289 } 00:09:41.289 EOF 00:09:41.289 )") 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 53502 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:41.289 "params": { 00:09:41.289 "name": "Nvme1", 00:09:41.289 "trtype": "tcp", 00:09:41.289 "traddr": "10.0.0.2", 00:09:41.289 "adrfam": "ipv4", 00:09:41.289 "trsvcid": "4420", 00:09:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.289 "hdgst": false, 00:09:41.289 "ddgst": false 00:09:41.289 }, 00:09:41.289 "method": "bdev_nvme_attach_controller" 00:09:41.289 }' 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:41.289 "params": { 00:09:41.289 "name": "Nvme1", 00:09:41.289 "trtype": "tcp", 00:09:41.289 "traddr": "10.0.0.2", 00:09:41.289 "adrfam": "ipv4", 00:09:41.289 "trsvcid": "4420", 00:09:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.289 "hdgst": false, 00:09:41.289 "ddgst": false 00:09:41.289 }, 00:09:41.289 "method": "bdev_nvme_attach_controller" 00:09:41.289 }' 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:41.289 "params": { 00:09:41.289 "name": "Nvme1", 00:09:41.289 "trtype": "tcp", 00:09:41.289 "traddr": "10.0.0.2", 00:09:41.289 "adrfam": "ipv4", 00:09:41.289 "trsvcid": "4420", 00:09:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.289 "hdgst": false, 00:09:41.289 "ddgst": false 00:09:41.289 }, 00:09:41.289 "method": "bdev_nvme_attach_controller" 00:09:41.289 }' 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:41.289 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:41.289 "params": { 00:09:41.289 "name": "Nvme1", 00:09:41.289 "trtype": "tcp", 00:09:41.289 "traddr": "10.0.0.2", 00:09:41.289 "adrfam": "ipv4", 00:09:41.289 "trsvcid": "4420", 00:09:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.289 "hdgst": false, 00:09:41.289 "ddgst": false 00:09:41.289 }, 00:09:41.289 "method": "bdev_nvme_attach_controller" 00:09:41.289 }' 00:09:41.289 [2024-07-26 06:01:52.531191] Starting SPDK v24.09-pre git sha1 
704257090 / DPDK 24.03.0 initialization... 00:09:41.289 [2024-07-26 06:01:52.531191] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:41.289 [2024-07-26 06:01:52.531191] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:41.289 [2024-07-26 06:01:52.531356] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-26 06:01:52.531355] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 [2024-07-26 06:01:52.531353] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:41.289 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:41.289 --proc-type=auto ] 00:09:41.289 [2024-07-26 06:01:52.533247] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:41.289 [2024-07-26 06:01:52.533416] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:41.289 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.548 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.548 [2024-07-26 06:01:52.775711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.548 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.548 [2024-07-26 06:01:52.877867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.808 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.808 [2024-07-26 06:01:53.007865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.808 [2024-07-26 06:01:53.023293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.808 [2024-07-26 06:01:53.072677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.808 [2024-07-26 06:01:53.108615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.066 [2024-07-26 06:01:53.254539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.066 [2024-07-26 06:01:53.294881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:42.323 Running I/O for 1 seconds... 00:09:42.323 Running I/O for 1 seconds... 00:09:42.579 Running I/O for 1 seconds... 00:09:42.579 Running I/O for 1 seconds... 
00:09:43.517 00:09:43.518 Latency(us) 00:09:43.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.518 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:43.518 Nvme1n1 : 1.04 4894.88 19.12 0.00 0.00 25579.10 3786.52 45438.29 00:09:43.518 =================================================================================================================== 00:09:43.518 Total : 4894.88 19.12 0.00 0.00 25579.10 3786.52 45438.29 00:09:43.518 00:09:43.518 Latency(us) 00:09:43.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.518 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:43.518 Nvme1n1 : 1.01 7127.24 27.84 0.00 0.00 17842.88 6941.96 25243.50 00:09:43.518 =================================================================================================================== 00:09:43.518 Total : 7127.24 27.84 0.00 0.00 17842.88 6941.96 25243.50 00:09:43.518 00:09:43.518 Latency(us) 00:09:43.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.518 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:43.518 Nvme1n1 : 1.00 146892.58 573.80 0.00 0.00 868.20 362.57 1098.33 00:09:43.518 =================================================================================================================== 00:09:43.518 Total : 146892.58 573.80 0.00 0.00 868.20 362.57 1098.33 00:09:43.775 00:09:43.775 Latency(us) 00:09:43.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.775 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:43.775 Nvme1n1 : 1.01 5665.02 22.13 0.00 0.00 22502.07 6650.69 70681.79 00:09:43.775 =================================================================================================================== 00:09:43.775 Total : 5665.02 22.13 0.00 0.00 22502.07 6650.69 70681.79 00:09:43.775 libgcov profiling 
error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:43.775 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:44.035 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:44.294 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:44.294 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 53503 00:09:44.294 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 53506 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 53508 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.553 06:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.553 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.553 rmmod nvme_tcp 00:09:44.815 rmmod nvme_fabrics 00:09:44.815 rmmod nvme_keyring 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 53337 ']' 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 53337 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 53337 ']' 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 53337 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 53337 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 53337' 00:09:44.815 killing process with pid 53337 
00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 53337 00:09:44.815 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 53337 00:09:45.792 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.052 06:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.957 00:09:47.957 real 0m10.419s 00:09:47.957 user 0m31.582s 00:09:47.957 sys 0m4.223s 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.957 ************************************ 00:09:47.957 END TEST nvmf_bdev_io_wait 00:09:47.957 ************************************ 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core -- 
nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.957 ************************************ 00:09:47.957 START TEST nvmf_queue_depth 00:09:47.957 ************************************ 00:09:47.957 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:48.215 * Looking for test storage... 00:09:48.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.215 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.215 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:48.215 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.215 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.215 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.215 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.216 06:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.216 06:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:50.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:50.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.123 06:02:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:50.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.123 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:50.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.124 06:02:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.124 
06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:09:50.124 00:09:50.124 --- 10.0.0.2 ping statistics --- 00:09:50.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.124 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:09:50.124 00:09:50.124 --- 10.0.0.1 ping statistics --- 00:09:50.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.124 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.124 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=55999 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 55999 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 55999 ']' 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.384 06:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.384 [2024-07-26 06:02:01.549723] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:50.384 [2024-07-26 06:02:01.549871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.384 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.384 [2024-07-26 06:02:01.689486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.642 [2024-07-26 06:02:01.945520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.642 [2024-07-26 06:02:01.945583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:50.642 [2024-07-26 06:02:01.945607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.642 [2024-07-26 06:02:01.945627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.642 [2024-07-26 06:02:01.945657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.642 [2024-07-26 06:02:01.945723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.225 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 [2024-07-26 06:02:02.561251] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 Malloc0 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.483 [2024-07-26 06:02:02.691145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.483 06:02:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=56155 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.483 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 56155 /var/tmp/bdevperf.sock 00:09:51.484 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:51.484 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 56155 ']' 00:09:51.484 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:51.484 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.484 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:51.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:51.484 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.484 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 [2024-07-26 06:02:02.773891] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:51.484 [2024-07-26 06:02:02.774040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56155 ] 00:09:51.741 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.741 [2024-07-26 06:02:02.917284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.000 [2024-07-26 06:02:03.187041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.564 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.564 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:52.564 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:52.564 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.564 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.564 NVMe0n1 00:09:52.564 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.564 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:52.823 Running I/O for 10 seconds... 
00:10:02.806 00:10:02.806 Latency(us) 00:10:02.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.806 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:02.806 Verification LBA range: start 0x0 length 0x4000 00:10:02.806 NVMe0n1 : 10.10 6314.62 24.67 0.00 0.00 161195.67 23398.78 103304.15 00:10:02.806 =================================================================================================================== 00:10:02.806 Total : 6314.62 24.67 0.00 0.00 161195.67 23398.78 103304.15 00:10:02.806 0 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 56155 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 56155 ']' 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 56155 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56155 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.806 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.066 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56155' 00:10:03.066 killing process with pid 56155 00:10:03.066 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 56155 00:10:03.066 Received shutdown signal, test time was about 10.000000 seconds 00:10:03.066 00:10:03.066 Latency(us) 00:10:03.066 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.066 =================================================================================================================== 00:10:03.066 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:03.066 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 56155 00:10:04.006 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.006 rmmod nvme_tcp 00:10:04.006 rmmod nvme_fabrics 00:10:04.006 rmmod nvme_keyring 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 55999 ']' 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 
55999 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 55999 ']' 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 55999 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 55999 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 55999' 00:10:04.006 killing process with pid 55999 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 55999 00:10:04.006 06:02:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 55999 00:10:05.382 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.643 06:02:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:07.581 00:10:07.581 real 0m19.557s 00:10:07.581 user 0m27.959s 00:10:07.581 sys 0m3.196s 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.581 ************************************ 00:10:07.581 END TEST nvmf_queue_depth 00:10:07.581 ************************************ 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.581 ************************************ 00:10:07.581 START TEST nvmf_target_multipath 00:10:07.581 ************************************ 00:10:07.581 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:07.840 * Looking for test storage... 
00:10:07.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.840 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:07.841 06:02:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:10:09.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:09.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:09.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:09.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.745 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.746 06:02:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.746 06:02:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:09.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:10:09.746 00:10:09.746 --- 10.0.0.2 ping statistics --- 00:10:09.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.746 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:10:09.746 00:10:09.746 --- 10.0.0.1 ping statistics --- 00:10:09.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.746 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:09.746 06:02:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:09.746 only one NIC for nvmf test 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.746 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.746 rmmod nvme_tcp 00:10:10.004 rmmod nvme_fabrics 00:10:10.004 rmmod nvme_keyring 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.004 06:02:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.004 06:02:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:11.911 06:02:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.911 00:10:11.911 real 0m4.313s 00:10:11.911 user 0m0.790s 00:10:11.911 sys 0m1.497s 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:11.911 ************************************ 00:10:11.911 END TEST nvmf_target_multipath 00:10:11.911 ************************************ 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.911 
06:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.911 ************************************ 00:10:11.911 START TEST nvmf_zcopy 00:10:11.911 ************************************ 00:10:11.911 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:12.185 * Looking for test storage... 00:10:12.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.185 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.186 06:02:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:12.186 06:02:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.099 06:02:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:14.099 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:14.099 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.099 06:02:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:14.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.099 
06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:14.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.099 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:10:14.100 00:10:14.100 --- 10.0.0.2 ping statistics --- 00:10:14.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.100 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:10:14.100 00:10:14.100 --- 10.0.0.1 ping statistics --- 00:10:14.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.100 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.100 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=61636 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 61636 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 61636 ']' 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.359 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.359 [2024-07-26 06:02:25.544895] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:14.359 [2024-07-26 06:02:25.545056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.359 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.359 [2024-07-26 06:02:25.687633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.618 [2024-07-26 06:02:25.943513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.618 [2024-07-26 06:02:25.943599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.618 [2024-07-26 06:02:25.943629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.618 [2024-07-26 06:02:25.943656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.618 [2024-07-26 06:02:25.943679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:14.618 [2024-07-26 06:02:25.943737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.183 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.183 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:15.183 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.183 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.183 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.441 [2024-07-26 06:02:26.540096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.441 [2024-07-26 06:02:26.556326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.441 malloc0 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.441 06:02:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:15.441 { 00:10:15.441 "params": { 00:10:15.441 "name": "Nvme$subsystem", 00:10:15.441 "trtype": "$TEST_TRANSPORT", 00:10:15.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.441 "adrfam": "ipv4", 00:10:15.441 "trsvcid": "$NVMF_PORT", 00:10:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.441 "hdgst": ${hdgst:-false}, 00:10:15.441 "ddgst": ${ddgst:-false} 00:10:15.441 }, 00:10:15.441 "method": "bdev_nvme_attach_controller" 00:10:15.441 } 00:10:15.441 EOF 00:10:15.441 )") 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:15.441 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:15.441 "params": { 00:10:15.441 "name": "Nvme1", 00:10:15.441 "trtype": "tcp", 00:10:15.441 "traddr": "10.0.0.2", 00:10:15.441 "adrfam": "ipv4", 00:10:15.441 "trsvcid": "4420", 00:10:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.441 "hdgst": false, 00:10:15.441 "ddgst": false 00:10:15.441 }, 00:10:15.441 "method": "bdev_nvme_attach_controller" 00:10:15.441 }' 00:10:15.441 [2024-07-26 06:02:26.721423] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:15.441 [2024-07-26 06:02:26.721583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61789 ] 00:10:15.698 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.698 [2024-07-26 06:02:26.859524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.957 [2024-07-26 06:02:27.121377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.526 Running I/O for 10 seconds... 
00:10:26.509 00:10:26.509 Latency(us) 00:10:26.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:26.509 Verification LBA range: start 0x0 length 0x1000 00:10:26.509 Nvme1n1 : 10.02 4454.14 34.80 0.00 0.00 28660.09 3640.89 39612.87 00:10:26.509 =================================================================================================================== 00:10:26.509 Total : 4454.14 34.80 0.00 0.00 28660.09 3640.89 39612.87 00:10:27.447 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=63243 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:27.447 { 00:10:27.447 "params": { 00:10:27.447 "name": "Nvme$subsystem", 00:10:27.447 "trtype": "$TEST_TRANSPORT", 00:10:27.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.447 "adrfam": "ipv4", 00:10:27.447 "trsvcid": "$NVMF_PORT", 
00:10:27.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.447 "hdgst": ${hdgst:-false}, 00:10:27.447 "ddgst": ${ddgst:-false} 00:10:27.447 }, 00:10:27.447 "method": "bdev_nvme_attach_controller" 00:10:27.447 } 00:10:27.447 EOF 00:10:27.447 )") 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:27.447 [2024-07-26 06:02:38.665567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.665633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:27.447 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:27.447 "params": { 00:10:27.447 "name": "Nvme1", 00:10:27.447 "trtype": "tcp", 00:10:27.447 "traddr": "10.0.0.2", 00:10:27.447 "adrfam": "ipv4", 00:10:27.447 "trsvcid": "4420", 00:10:27.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.447 "hdgst": false, 00:10:27.447 "ddgst": false 00:10:27.447 }, 00:10:27.447 "method": "bdev_nvme_attach_controller" 00:10:27.447 }' 00:10:27.447 [2024-07-26 06:02:38.673429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.673465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.681447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.681479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.689474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 
[2024-07-26 06:02:38.689508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.697512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.697548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.705544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.705574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.713550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.713577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.721538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.721566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.729593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.729621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.737589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.737617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.739426] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:27.447 [2024-07-26 06:02:38.739542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63243 ] 00:10:27.447 [2024-07-26 06:02:38.745624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.745652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.753638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.753665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.447 [2024-07-26 06:02:38.761652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.447 [2024-07-26 06:02:38.761680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.448 [2024-07-26 06:02:38.769695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.448 [2024-07-26 06:02:38.769725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.448 [2024-07-26 06:02:38.777707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.448 [2024-07-26 06:02:38.777735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.708 [2024-07-26 06:02:38.785712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.708 [2024-07-26 06:02:38.785740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.708 [2024-07-26 06:02:38.793758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.708 [2024-07-26 06:02:38.793787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:27.708 [2024-07-26 06:02:38.801759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.708 [2024-07-26 06:02:38.801786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:27.708 EAL: No free 2048 kB hugepages reported on node 1 
00:10:27.708 [2024-07-26 06:02:38.874859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:10:27.969 [2024-07-26 06:02:39.138497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:10:28.514 Running I/O for 5 seconds... 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.648850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.648912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.663154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.663190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.677228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.677264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.691834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.691876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.705837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.705872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.720477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.720513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.735260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.735296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.749660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.749696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:29.559 [2024-07-26 06:02:40.764083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.764138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.778444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.778493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.793019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.793078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.807299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.807336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.821732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.821768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.836632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.836671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.851360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.851396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.865751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.865786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.559 [2024-07-26 06:02:40.880238] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.559 [2024-07-26 06:02:40.880275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.894634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.894675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.909202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.909238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.923527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.923573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.937349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.937386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.951504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.951540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.965437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.965473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.979555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.979591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:40.993736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:40.993773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.007880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.007916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.022070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.022124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.036490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.036526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.051142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.051197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.066021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.066084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.080804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.080840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.094440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.094476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.108523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 
[2024-07-26 06:02:41.108560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.123371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.123407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.819 [2024-07-26 06:02:41.137723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.819 [2024-07-26 06:02:41.137760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.152676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.152717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.167185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.167220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.181950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.181991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.195687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.195738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.209442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.209498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.223205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.223245] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.237586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.237622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.252384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.252421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.266567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.266607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.280713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.280749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.295774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.295813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.310639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.310675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.325023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.325070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.339320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.339361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:30.079 [2024-07-26 06:02:41.353936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.353975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.367905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.367945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.382210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.382247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.396640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.396695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.079 [2024-07-26 06:02:41.411987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.079 [2024-07-26 06:02:41.412028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.426945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.426986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.442263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.442299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.456935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.456971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.471086] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.471132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.485590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.485644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.500069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.500124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.514906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.514945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.529372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.529408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.543675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.543711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.557415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.557455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.572037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.572086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.586669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.586706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.601234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.601270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.616107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.616143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.630671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.630726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.645609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.645646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.338 [2024-07-26 06:02:41.660723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.338 [2024-07-26 06:02:41.660759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.674319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.674373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.689222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.689257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.703507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 
[2024-07-26 06:02:41.703560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.718435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.718471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.732650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.732687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.747163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.747207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.761480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.761518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.775983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.776038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.790852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.790892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.806007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.806047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.820384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.820422] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.834929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.834971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.849752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.849808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.864836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.864874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.879587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.879624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.893951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.893988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.907668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.907705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.598 [2024-07-26 06:02:41.921589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.598 [2024-07-26 06:02:41.921625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:41.935740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:41.935778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:30.858 [2024-07-26 06:02:41.950455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:41.950493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:41.965135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:41.965172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:41.979327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:41.979363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:41.993610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:41.993646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.007459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.007495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.022190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.022228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.036028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.036072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.050069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.050116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.064124] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.064161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.078131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.078168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.092455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.092492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.106597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.106633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.120122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.120158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.134334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.134371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.148786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.148823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.163036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.858 [2024-07-26 06:02:42.163082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.858 [2024-07-26 06:02:42.177153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:10:30.858 [2024-07-26 06:02:42.177190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
*ERROR*: Requested NSID 1 already in use
00:10:33.215 [2024-07-26 06:02:44.524455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:33.215 [2024-07-26 06:02:44.538511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:33.215 [2024-07-26 06:02:44.538551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:33.475 [2024-07-26 06:02:44.553606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:33.475 [2024-07-26 06:02:44.553643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:33.475 [2024-07-26 06:02:44.567920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:33.475 [2024-07-26 06:02:44.567973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:33.475 [2024-07-26 06:02:44.582165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:33.475 [2024-07-26 06:02:44.582201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:33.475 [2024-07-26 06:02:44.591638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:33.475 [2024-07-26 06:02:44.591673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:33.475
00:10:33.475 Latency(us)
00:10:33.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:33.475 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:33.475 Nvme1n1 : 5.01 8832.76 69.01 0.00 0.00 14466.49 5437.06 23981.32
00:10:33.475 ===================================================================================================================
00:10:33.475 Total : 8832.76 69.01 0.00 0.00 14466.49 5437.06 23981.32
00:10:33.475 [2024-07-26 06:02:44.597306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use 00:10:33.475 [2024-07-26 06:02:44.597358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.605382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.605430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.613365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.613396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.621423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.621459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.629436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.629473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.637472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.637507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.645604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.645665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.653646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.653712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.661514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.661547] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.669552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.669586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.677561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.677594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.685596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.685629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.693619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.693652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.701652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.701686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.709691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.709725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.717716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.717750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.725728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.725771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.475 [2024-07-26 06:02:44.733855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.733917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.741820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.741874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.749783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.749816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.757807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.757840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.765802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.765834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.773845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.773877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.781885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.781928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.789876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.789909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.797930] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.797963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.475 [2024-07-26 06:02:44.805915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.475 [2024-07-26 06:02:44.805947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.733 [2024-07-26 06:02:44.813959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.733 [2024-07-26 06:02:44.813993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.733 [2024-07-26 06:02:44.822008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.733 [2024-07-26 06:02:44.822044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.733 [2024-07-26 06:02:44.829985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.830018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.838026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.838071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.846054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.846111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.854073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.854119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.862122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.862150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.870132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.870163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.878184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.878218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.886325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.886391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.894227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.894258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.902224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.902253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.910239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.910267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.918231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.918260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.926273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 
[2024-07-26 06:02:44.926301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.934280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.934315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.942387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.942430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.950513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.950581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.958497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.958565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.966590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.966661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.974449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.974484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.982443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.982477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.990498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.990533] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:44.998502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:44.998535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:45.006533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.006567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:45.014566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.014599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:45.022563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.022596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:45.030603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.030638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:45.038591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.038620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:45.046628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.046663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.734 [2024-07-26 06:02:45.054667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.054702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.734 [2024-07-26 06:02:45.062670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.734 [2024-07-26 06:02:45.062704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.994 [2024-07-26 06:02:45.070715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.994 [2024-07-26 06:02:45.070749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.994 [2024-07-26 06:02:45.078745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.994 [2024-07-26 06:02:45.078779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.994 [2024-07-26 06:02:45.086765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.994 [2024-07-26 06:02:45.086807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.994 [2024-07-26 06:02:45.094797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.994 [2024-07-26 06:02:45.094831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.994 [2024-07-26 06:02:45.102802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.994 [2024-07-26 06:02:45.102836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.994 [2024-07-26 06:02:45.110808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.994 [2024-07-26 06:02:45.110842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.994 [2024-07-26 06:02:45.118850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.118884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.126945] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.127004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.135018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.135092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.142918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.142953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.150933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.150967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.158969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.159003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.166984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.167017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.174996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.175031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.183057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.183113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.191037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.191080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.199084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.199129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.207119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.207148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.215133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.215163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.223188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.223218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.231204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.231234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.239186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.239225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.247219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.247252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.255369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 
[2024-07-26 06:02:45.255437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.263291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.263322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.271279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.271307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.279318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.279362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.287321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.287365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.295364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.295392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.303369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.303412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.311416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.311448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.995 [2024-07-26 06:02:45.319428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.995 [2024-07-26 06:02:45.319461] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.327485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.327520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.335521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.335555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.343535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.343576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.351698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.351765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.359578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.359614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.367588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.367623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.375663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.375697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.383628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.383663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.256 [2024-07-26 06:02:45.391672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.391707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.399694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.399729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.407692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.407725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.415741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.415774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.423784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.423818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.431757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.431800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.439799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.439834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.447806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.447843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.455989] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.456052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.463871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.463905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.471895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.471929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.479921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.479956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.487932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.487965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.495938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.495972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.504134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.504191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.512028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.512081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.520026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.520068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.528043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.528087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.536048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.536106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.544118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.544148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.552138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.552167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.560136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.560164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.568194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.568223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.576179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.256 [2024-07-26 06:02:45.576207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.256 [2024-07-26 06:02:45.584221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.257 
[2024-07-26 06:02:45.584251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.592239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.592270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:10:34.517 [2024-07-26 06:02:45.600241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.600270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.608274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.608303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.616295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.616324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.624305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.624332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.632402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.632461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.640374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.640420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.648426] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.648460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 [2024-07-26 06:02:45.656458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.517 [2024-07-26 06:02:45.656494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (63243) - No such process 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 63243 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 delay0 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.517 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:34.517 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.777 [2024-07-26 06:02:45.876706] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:41.347 Initializing NVMe Controllers 00:10:41.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.347 Initialization complete. Launching workers. 00:10:41.347 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 135 00:10:41.347 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 422, failed to submit 33 00:10:41.347 success 263, unsuccess 159, failed 0 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:41.347 rmmod nvme_tcp 00:10:41.347 rmmod nvme_fabrics 00:10:41.347 rmmod nvme_keyring 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 61636 ']' 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 61636 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 61636 ']' 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 61636 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61636 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61636' 00:10:41.347 killing process with pid 61636 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 61636 00:10:41.347 06:02:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 61636 00:10:42.284 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for 
function 280 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.284 06:02:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.823 00:10:44.823 real 0m32.413s 00:10:44.823 user 0m48.602s 00:10:44.823 sys 0m8.154s 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.823 ************************************ 00:10:44.823 END TEST nvmf_zcopy 00:10:44.823 ************************************ 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.823 
************************************ 00:10:44.823 START TEST nvmf_nmic 00:10:44.823 ************************************ 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.823 * Looking for test storage... 00:10:44.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.823 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.824 06:02:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.824 06:02:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:46.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:46.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:46.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:46.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:46.763 06:02:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:46.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:46.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:10:46.763 00:10:46.763 --- 10.0.0.2 ping statistics --- 00:10:46.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.763 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:10:46.763 00:10:46.763 --- 10.0.0.1 ping statistics --- 00:10:46.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.763 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=66895 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 66895 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 66895 ']' 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.763 06:02:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 [2024-07-26 06:02:57.967361] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:46.763 [2024-07-26 06:02:57.967515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.763 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.021 [2024-07-26 06:02:58.110287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.279 [2024-07-26 06:02:58.377253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.279 [2024-07-26 06:02:58.377336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.279 [2024-07-26 06:02:58.377363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.279 [2024-07-26 06:02:58.377385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.279 [2024-07-26 06:02:58.377406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:47.279 [2024-07-26 06:02:58.377527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.279 [2024-07-26 06:02:58.377839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.279 [2024-07-26 06:02:58.377902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.279 [2024-07-26 06:02:58.377923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 [2024-07-26 06:02:58.913151] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.850 Malloc0 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 [2024-07-26 06:02:59.018748] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:47.850 test case1: single bdev can't be used in multiple subsystems 
00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 [2024-07-26 06:02:59.042520] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:47.850 [2024-07-26 06:02:59.042578] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:47.850 [2024-07-26 06:02:59.042602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.850 request: 00:10:47.850 { 00:10:47.850 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:47.850 "namespace": { 00:10:47.850 
"bdev_name": "Malloc0", 00:10:47.850 "no_auto_visible": false 00:10:47.850 }, 00:10:47.850 "method": "nvmf_subsystem_add_ns", 00:10:47.850 "req_id": 1 00:10:47.850 } 00:10:47.850 Got JSON-RPC error response 00:10:47.850 response: 00:10:47.850 { 00:10:47.850 "code": -32602, 00:10:47.850 "message": "Invalid parameters" 00:10:47.850 } 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:47.850 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:47.851 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:47.851 Adding namespace failed - expected result. 00:10:47.851 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:47.851 test case2: host connect to nvmf target in multiple paths 00:10:47.851 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:47.851 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.851 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.851 [2024-07-26 06:02:59.050671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:47.851 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.851 06:02:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.419 06:02:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:49.356 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.356 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.356 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.356 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:49.356 06:03:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.256 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.256 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.256 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.256 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:51.256 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.256 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:51.256 06:03:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:51.256 [global] 00:10:51.256 thread=1 00:10:51.256 invalidate=1 00:10:51.256 rw=write 00:10:51.256 time_based=1 00:10:51.256 runtime=1 00:10:51.256 ioengine=libaio 00:10:51.256 direct=1 00:10:51.256 bs=4096 00:10:51.256 iodepth=1 00:10:51.256 
norandommap=0 00:10:51.256 numjobs=1 00:10:51.256 00:10:51.256 verify_dump=1 00:10:51.256 verify_backlog=512 00:10:51.256 verify_state_save=0 00:10:51.256 do_verify=1 00:10:51.256 verify=crc32c-intel 00:10:51.256 [job0] 00:10:51.256 filename=/dev/nvme0n1 00:10:51.256 Could not set queue depth (nvme0n1) 00:10:51.256 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.256 fio-3.35 00:10:51.256 Starting 1 thread 00:10:52.646 00:10:52.646 job0: (groupid=0, jobs=1): err= 0: pid=67653: Fri Jul 26 06:03:03 2024 00:10:52.646 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:52.646 slat (nsec): min=6193, max=58394, avg=14970.93, stdev=5176.18 00:10:52.646 clat (usec): min=304, max=41971, avg=542.59, stdev=2858.19 00:10:52.646 lat (usec): min=311, max=41988, avg=557.56, stdev=2858.33 00:10:52.646 clat percentiles (usec): 00:10:52.646 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 330], 00:10:52.646 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 343], 00:10:52.646 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 371], 00:10:52.646 | 99.00th=[ 441], 99.50th=[ 685], 99.90th=[42206], 99.95th=[42206], 00:10:52.646 | 99.99th=[42206] 00:10:52.646 write: IOPS=1463, BW=5854KiB/s (5995kB/s)(5860KiB/1001msec); 0 zone resets 00:10:52.646 slat (usec): min=7, max=29672, avg=38.19, stdev=774.80 00:10:52.646 clat (usec): min=187, max=420, avg=246.62, stdev=31.82 00:10:52.646 lat (usec): min=197, max=29970, avg=284.81, stdev=776.94 00:10:52.646 clat percentiles (usec): 00:10:52.646 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:10:52.646 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:10:52.646 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 302], 00:10:52.646 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 420], 00:10:52.646 | 99.99th=[ 420] 00:10:52.646 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, 
stdev= 0.00, samples=1 00:10:52.646 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:52.646 lat (usec) : 250=39.33%, 500=60.35%, 750=0.12% 00:10:52.646 lat (msec) : 50=0.20% 00:10:52.646 cpu : usr=3.40%, sys=5.30%, ctx=2492, majf=0, minf=2 00:10:52.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.646 issued rwts: total=1024,1465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.646 00:10:52.646 Run status group 0 (all jobs): 00:10:52.646 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:10:52.646 WRITE: bw=5854KiB/s (5995kB/s), 5854KiB/s-5854KiB/s (5995kB/s-5995kB/s), io=5860KiB (6001kB), run=1001-1001msec 00:10:52.646 00:10:52.646 Disk stats (read/write): 00:10:52.646 nvme0n1: ios=1051/1090, merge=0/0, ticks=1534/256, in_queue=1790, util=98.70% 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.646 06:03:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.906 rmmod nvme_tcp 00:10:52.906 rmmod nvme_fabrics 00:10:52.906 rmmod nvme_keyring 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 66895 ']' 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 66895 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 66895 ']' 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 66895 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66895 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66895' 00:10:52.906 killing process with pid 66895 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 66895 00:10:52.906 06:03:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 66895 00:10:54.285 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.285 06:03:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.189 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.189 00:10:56.189 real 0m11.831s 00:10:56.189 
user 0m27.827s 00:10:56.189 sys 0m2.615s 00:10:56.189 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.189 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.189 ************************************ 00:10:56.189 END TEST nvmf_nmic 00:10:56.189 ************************************ 00:10:56.447 06:03:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:56.448 ************************************ 00:10:56.448 START TEST nvmf_fio_target 00:10:56.448 ************************************ 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.448 * Looking for test storage... 
00:10:56.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.448 06:03:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.448 06:03:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.448 06:03:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.351 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.351 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:58.351 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:58.351 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:58.351 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:58.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:58.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:58.352 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:58.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:58.352 06:03:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:58.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:10:58.352 00:10:58.352 --- 10.0.0.2 ping statistics --- 00:10:58.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.352 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:58.352 00:10:58.352 --- 10.0.0.1 ping statistics --- 00:10:58.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.352 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:58.352 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:58.353 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.353 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.353 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=70367 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 70367 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 70367 ']' 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.613 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.613 [2024-07-26 06:03:09.776620] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:58.613 [2024-07-26 06:03:09.776762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.613 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.613 [2024-07-26 06:03:09.910691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.873 [2024-07-26 06:03:10.185935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.873 [2024-07-26 06:03:10.186006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:58.873 [2024-07-26 06:03:10.186027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.873 [2024-07-26 06:03:10.186044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.873 [2024-07-26 06:03:10.186068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.873 [2024-07-26 06:03:10.186197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.873 [2024-07-26 06:03:10.186261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.873 [2024-07-26 06:03:10.186302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.873 [2024-07-26 06:03:10.186314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.440 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.440 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:59.440 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:59.440 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.440 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.440 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.440 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:59.698 [2024-07-26 06:03:10.944055] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.698 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.269 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:00.269 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.527 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:00.527 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.785 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:00.785 06:03:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.043 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:01.043 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:01.301 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.559 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:01.559 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.816 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:01.816 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.405 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:02.405 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:02.405 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.663 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:02.663 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.919 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:02.919 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.176 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.433 [2024-07-26 06:03:14.665741] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.433 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:03.690 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:03.949 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.518 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:04.518 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:04.518 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.518 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:04.518 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:04.518 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:07.051 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:07.051 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:07.051 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.051 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:07.051 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.051 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:07.051 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:07.051 [global] 00:11:07.051 thread=1 00:11:07.051 invalidate=1 00:11:07.051 rw=write 00:11:07.051 time_based=1 00:11:07.051 runtime=1 00:11:07.051 ioengine=libaio 00:11:07.051 direct=1 00:11:07.051 bs=4096 00:11:07.051 iodepth=1 00:11:07.051 norandommap=0 00:11:07.051 numjobs=1 00:11:07.051 00:11:07.051 verify_dump=1 00:11:07.051 verify_backlog=512 00:11:07.051 verify_state_save=0 00:11:07.051 do_verify=1 00:11:07.051 verify=crc32c-intel 00:11:07.051 [job0] 00:11:07.051 filename=/dev/nvme0n1 00:11:07.051 [job1] 00:11:07.051 filename=/dev/nvme0n2 00:11:07.051 [job2] 00:11:07.051 filename=/dev/nvme0n3 00:11:07.051 [job3] 00:11:07.051 filename=/dev/nvme0n4 00:11:07.051 Could not set queue depth (nvme0n1) 00:11:07.051 Could not set queue depth (nvme0n2) 00:11:07.051 Could not set queue depth (nvme0n3) 00:11:07.051 Could not set queue depth (nvme0n4) 00:11:07.051 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.051 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.051 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.051 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.051 fio-3.35 00:11:07.051 Starting 4 threads 00:11:07.992 00:11:07.992 job0: (groupid=0, jobs=1): err= 0: pid=71579: Fri Jul 26 06:03:19 2024 00:11:07.992 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:07.992 slat (nsec): min=6428, max=57715, avg=11223.92, stdev=4951.71 00:11:07.992 clat (usec): min=278, max=487, avg=320.20, stdev=21.25 00:11:07.992 lat (usec): min=286, max=505, avg=331.43, stdev=24.26 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 302], 
00:11:07.992 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:11:07.992 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 355], 00:11:07.992 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 457], 99.95th=[ 490], 00:11:07.992 | 99.99th=[ 490] 00:11:07.992 write: IOPS=1797, BW=7189KiB/s (7361kB/s)(7196KiB/1001msec); 0 zone resets 00:11:07.992 slat (nsec): min=8120, max=60247, avg=15861.01, stdev=7730.18 00:11:07.992 clat (usec): min=195, max=1954, avg=249.74, stdev=65.62 00:11:07.992 lat (usec): min=205, max=1976, avg=265.60, stdev=69.62 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:11:07.992 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 251], 00:11:07.992 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 310], 00:11:07.992 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 1680], 99.95th=[ 1958], 00:11:07.992 | 99.99th=[ 1958] 00:11:07.992 bw ( KiB/s): min= 8192, max= 8192, per=48.25%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.992 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.992 lat (usec) : 250=31.96%, 500=67.95%, 750=0.03% 00:11:07.992 lat (msec) : 2=0.06% 00:11:07.992 cpu : usr=3.60%, sys=5.90%, ctx=3336, majf=0, minf=1 00:11:07.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 issued rwts: total=1536,1799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.992 job1: (groupid=0, jobs=1): err= 0: pid=71580: Fri Jul 26 06:03:19 2024 00:11:07.992 read: IOPS=1168, BW=4675KiB/s (4788kB/s)(4680KiB/1001msec) 00:11:07.992 slat (nsec): min=6225, max=36797, avg=11186.64, stdev=5290.66 00:11:07.992 clat (usec): min=357, max=41398, avg=486.28, stdev=1684.84 
00:11:07.992 lat (usec): min=367, max=41415, avg=497.47, stdev=1685.53 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[ 363], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[ 383], 00:11:07.992 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 424], 00:11:07.992 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 478], 00:11:07.992 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:11:07.992 | 99.99th=[41157] 00:11:07.992 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:07.992 slat (nsec): min=7746, max=63946, avg=14739.27, stdev=7212.87 00:11:07.992 clat (usec): min=207, max=528, avg=250.78, stdev=35.22 00:11:07.992 lat (usec): min=217, max=550, avg=265.52, stdev=40.15 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:11:07.992 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 249], 00:11:07.992 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:11:07.992 | 99.00th=[ 363], 99.50th=[ 396], 99.90th=[ 441], 99.95th=[ 529], 00:11:07.992 | 99.99th=[ 529] 00:11:07.992 bw ( KiB/s): min= 5072, max= 5072, per=29.87%, avg=5072.00, stdev= 0.00, samples=1 00:11:07.992 iops : min= 1268, max= 1268, avg=1268.00, stdev= 0.00, samples=1 00:11:07.992 lat (usec) : 250=34.44%, 500=64.86%, 750=0.59% 00:11:07.992 lat (msec) : 2=0.04%, 50=0.07% 00:11:07.992 cpu : usr=3.40%, sys=4.00%, ctx=2707, majf=0, minf=1 00:11:07.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 issued rwts: total=1170,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.992 job2: (groupid=0, jobs=1): err= 0: pid=71581: Fri Jul 26 06:03:19 2024 00:11:07.992 read: IOPS=18, 
BW=74.0KiB/s (75.8kB/s)(76.0KiB/1027msec) 00:11:07.992 slat (nsec): min=15172, max=34832, avg=19386.00, stdev=6857.21 00:11:07.992 clat (usec): min=40897, max=45853, avg=41677.63, stdev=1145.68 00:11:07.992 lat (usec): min=40931, max=45871, avg=41697.02, stdev=1146.32 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:07.992 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:11:07.992 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[45876], 00:11:07.992 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:11:07.992 | 99.99th=[45876] 00:11:07.992 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:11:07.992 slat (nsec): min=8444, max=76840, avg=29224.51, stdev=12879.30 00:11:07.992 clat (usec): min=248, max=1493, avg=421.32, stdev=112.50 00:11:07.992 lat (usec): min=257, max=1538, avg=450.54, stdev=116.25 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 334], 00:11:07.992 | 30.00th=[ 367], 40.00th=[ 396], 50.00th=[ 420], 60.00th=[ 449], 00:11:07.992 | 70.00th=[ 474], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 570], 00:11:07.992 | 99.00th=[ 644], 99.50th=[ 734], 99.90th=[ 1500], 99.95th=[ 1500], 00:11:07.992 | 99.99th=[ 1500] 00:11:07.992 bw ( KiB/s): min= 4096, max= 4096, per=24.13%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.992 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.992 lat (usec) : 250=0.38%, 500=74.39%, 750=21.28% 00:11:07.992 lat (msec) : 2=0.38%, 50=3.58% 00:11:07.992 cpu : usr=1.17%, sys=1.66%, ctx=532, majf=0, minf=1 00:11:07.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 issued rwts: total=19,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:07.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.992 job3: (groupid=0, jobs=1): err= 0: pid=71582: Fri Jul 26 06:03:19 2024 00:11:07.992 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:11:07.992 slat (nsec): min=13446, max=34204, avg=18741.60, stdev=6746.29 00:11:07.992 clat (usec): min=40930, max=42408, avg=41245.05, stdev=487.72 00:11:07.992 lat (usec): min=40957, max=42425, avg=41263.79, stdev=486.00 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:07.992 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.992 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:07.992 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.992 | 99.99th=[42206] 00:11:07.992 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:07.992 slat (nsec): min=7135, max=68908, avg=22731.74, stdev=11860.99 00:11:07.992 clat (usec): min=219, max=1014, avg=318.47, stdev=82.72 00:11:07.992 lat (usec): min=242, max=1043, avg=341.20, stdev=87.56 00:11:07.992 clat percentiles (usec): 00:11:07.992 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:11:07.992 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 289], 60.00th=[ 330], 00:11:07.992 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 441], 00:11:07.992 | 99.00th=[ 506], 99.50th=[ 619], 99.90th=[ 1012], 99.95th=[ 1012], 00:11:07.992 | 99.99th=[ 1012] 00:11:07.992 bw ( KiB/s): min= 4096, max= 4096, per=24.13%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.992 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.992 lat (usec) : 250=22.74%, 500=72.37%, 750=0.75%, 1000=0.19% 00:11:07.992 lat (msec) : 2=0.19%, 50=3.76% 00:11:07.992 cpu : usr=0.50%, sys=1.20%, ctx=533, majf=0, minf=2 00:11:07.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:11:07.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.992 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.992 00:11:07.992 Run status group 0 (all jobs): 00:11:07.992 READ: bw=10.4MiB/s (10.9MB/s), 74.0KiB/s-6138KiB/s (75.8kB/s-6285kB/s), io=10.7MiB (11.2MB), run=1001-1027msec 00:11:07.992 WRITE: bw=16.6MiB/s (17.4MB/s), 1994KiB/s-7189KiB/s (2042kB/s-7361kB/s), io=17.0MiB (17.9MB), run=1001-1027msec 00:11:07.992 00:11:07.992 Disk stats (read/write): 00:11:07.992 nvme0n1: ios=1313/1536, merge=0/0, ticks=1260/368, in_queue=1628, util=85.77% 00:11:07.992 nvme0n2: ios=1080/1213, merge=0/0, ticks=774/288, in_queue=1062, util=91.35% 00:11:07.992 nvme0n3: ios=37/512, merge=0/0, ticks=1489/197, in_queue=1686, util=93.73% 00:11:07.992 nvme0n4: ios=79/512, merge=0/0, ticks=1271/152, in_queue=1423, util=95.89% 00:11:07.992 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:07.992 [global] 00:11:07.992 thread=1 00:11:07.992 invalidate=1 00:11:07.992 rw=randwrite 00:11:07.992 time_based=1 00:11:07.992 runtime=1 00:11:07.992 ioengine=libaio 00:11:07.992 direct=1 00:11:07.992 bs=4096 00:11:07.992 iodepth=1 00:11:07.992 norandommap=0 00:11:07.992 numjobs=1 00:11:07.992 00:11:07.992 verify_dump=1 00:11:07.992 verify_backlog=512 00:11:07.992 verify_state_save=0 00:11:07.992 do_verify=1 00:11:07.992 verify=crc32c-intel 00:11:07.992 [job0] 00:11:07.992 filename=/dev/nvme0n1 00:11:07.992 [job1] 00:11:07.992 filename=/dev/nvme0n2 00:11:07.992 [job2] 00:11:07.992 filename=/dev/nvme0n3 00:11:07.992 [job3] 00:11:07.992 filename=/dev/nvme0n4 00:11:07.992 Could not set queue depth (nvme0n1) 00:11:07.992 Could not set queue 
depth (nvme0n2) 00:11:07.992 Could not set queue depth (nvme0n3) 00:11:07.992 Could not set queue depth (nvme0n4) 00:11:08.252 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.252 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.252 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.252 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.252 fio-3.35 00:11:08.252 Starting 4 threads 00:11:09.633 00:11:09.633 job0: (groupid=0, jobs=1): err= 0: pid=71808: Fri Jul 26 06:03:20 2024 00:11:09.633 read: IOPS=180, BW=720KiB/s (737kB/s)(736KiB/1022msec) 00:11:09.633 slat (nsec): min=5867, max=35688, avg=11791.85, stdev=6164.29 00:11:09.633 clat (usec): min=388, max=41565, avg=4267.91, stdev=11773.38 00:11:09.633 lat (usec): min=401, max=41578, avg=4279.70, stdev=11776.55 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 396], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 445], 00:11:09.633 | 30.00th=[ 457], 40.00th=[ 490], 50.00th=[ 519], 60.00th=[ 545], 00:11:09.633 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 1385], 95.00th=[41157], 00:11:09.633 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:09.633 | 99.99th=[41681] 00:11:09.633 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:11:09.633 slat (nsec): min=9438, max=72791, avg=27273.50, stdev=12523.05 00:11:09.633 clat (usec): min=265, max=1237, avg=420.56, stdev=93.76 00:11:09.633 lat (usec): min=277, max=1249, avg=447.83, stdev=96.13 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 277], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 343], 00:11:09.633 | 30.00th=[ 367], 40.00th=[ 388], 50.00th=[ 408], 60.00th=[ 437], 00:11:09.633 | 70.00th=[ 461], 80.00th=[ 490], 90.00th=[ 537], 95.00th=[ 
578], 00:11:09.633 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 1237], 99.95th=[ 1237], 00:11:09.633 | 99.99th=[ 1237] 00:11:09.633 bw ( KiB/s): min= 4096, max= 4096, per=34.07%, avg=4096.00, stdev= 0.00, samples=1 00:11:09.633 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:09.633 lat (usec) : 500=72.13%, 750=24.86% 00:11:09.633 lat (msec) : 2=0.57%, 50=2.44% 00:11:09.633 cpu : usr=1.67%, sys=1.47%, ctx=696, majf=0, minf=1 00:11:09.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 issued rwts: total=184,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.633 job1: (groupid=0, jobs=1): err= 0: pid=71809: Fri Jul 26 06:03:20 2024 00:11:09.633 read: IOPS=73, BW=296KiB/s (303kB/s)(300KiB/1014msec) 00:11:09.633 slat (nsec): min=6380, max=37227, avg=13223.20, stdev=9399.53 00:11:09.633 clat (usec): min=416, max=41970, avg=11409.81, stdev=18119.71 00:11:09.633 lat (usec): min=425, max=41985, avg=11423.03, stdev=18123.89 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 416], 5.00th=[ 498], 10.00th=[ 519], 20.00th=[ 529], 00:11:09.633 | 30.00th=[ 545], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 586], 00:11:09.633 | 70.00th=[ 709], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:09.633 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:09.633 | 99.99th=[42206] 00:11:09.633 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:11:09.633 slat (nsec): min=7496, max=53569, avg=20673.97, stdev=9251.49 00:11:09.633 clat (usec): min=227, max=517, avg=278.75, stdev=39.03 00:11:09.633 lat (usec): min=240, max=529, avg=299.43, stdev=39.79 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 231], 5.00th=[ 239], 
10.00th=[ 243], 20.00th=[ 249], 00:11:09.633 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:11:09.633 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 334], 95.00th=[ 351], 00:11:09.633 | 99.00th=[ 420], 99.50th=[ 469], 99.90th=[ 519], 99.95th=[ 519], 00:11:09.633 | 99.99th=[ 519] 00:11:09.633 bw ( KiB/s): min= 4096, max= 4096, per=34.07%, avg=4096.00, stdev= 0.00, samples=1 00:11:09.633 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:09.633 lat (usec) : 250=18.74%, 500=68.99%, 750=8.86% 00:11:09.633 lat (msec) : 50=3.41% 00:11:09.633 cpu : usr=0.89%, sys=0.89%, ctx=587, majf=0, minf=2 00:11:09.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 issued rwts: total=75,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.633 job2: (groupid=0, jobs=1): err= 0: pid=71810: Fri Jul 26 06:03:20 2024 00:11:09.633 read: IOPS=942, BW=3769KiB/s (3860kB/s)(3852KiB/1022msec) 00:11:09.633 slat (nsec): min=5255, max=67035, avg=17522.53, stdev=10359.46 00:11:09.633 clat (usec): min=305, max=41588, avg=667.58, stdev=3201.58 00:11:09.633 lat (usec): min=312, max=41606, avg=685.11, stdev=3202.74 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 351], 00:11:09.633 | 30.00th=[ 367], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 420], 00:11:09.633 | 70.00th=[ 441], 80.00th=[ 478], 90.00th=[ 519], 95.00th=[ 545], 00:11:09.633 | 99.00th=[ 652], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:09.633 | 99.99th=[41681] 00:11:09.633 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:11:09.633 slat (nsec): min=6470, max=69488, avg=18456.72, stdev=12142.91 00:11:09.633 clat (usec): min=216, 
max=1396, avg=324.12, stdev=87.03 00:11:09.633 lat (usec): min=223, max=1407, avg=342.57, stdev=92.27 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:11:09.633 | 30.00th=[ 262], 40.00th=[ 285], 50.00th=[ 310], 60.00th=[ 347], 00:11:09.633 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 445], 00:11:09.633 | 99.00th=[ 498], 99.50th=[ 537], 99.90th=[ 1106], 99.95th=[ 1401], 00:11:09.633 | 99.99th=[ 1401] 00:11:09.633 bw ( KiB/s): min= 1984, max= 6208, per=34.07%, avg=4096.00, stdev=2986.82, samples=2 00:11:09.633 iops : min= 496, max= 1552, avg=1024.00, stdev=746.70, samples=2 00:11:09.633 lat (usec) : 250=11.12%, 500=81.13%, 750=7.25% 00:11:09.633 lat (msec) : 2=0.20%, 50=0.30% 00:11:09.633 cpu : usr=1.67%, sys=3.92%, ctx=1988, majf=0, minf=1 00:11:09.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 issued rwts: total=963,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.633 job3: (groupid=0, jobs=1): err= 0: pid=71811: Fri Jul 26 06:03:20 2024 00:11:09.633 read: IOPS=521, BW=2086KiB/s (2136kB/s)(2088KiB/1001msec) 00:11:09.633 slat (nsec): min=4961, max=68738, avg=19549.31, stdev=11326.22 00:11:09.633 clat (usec): min=337, max=41198, avg=1229.16, stdev=5565.86 00:11:09.633 lat (usec): min=353, max=41206, avg=1248.71, stdev=5566.52 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 375], 20.00th=[ 400], 00:11:09.633 | 30.00th=[ 416], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 449], 00:11:09.633 | 70.00th=[ 465], 80.00th=[ 490], 90.00th=[ 545], 95.00th=[ 603], 00:11:09.633 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:09.633 | 
99.99th=[41157] 00:11:09.633 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:09.633 slat (nsec): min=6597, max=78317, avg=19947.73, stdev=13002.57 00:11:09.633 clat (usec): min=213, max=1157, avg=312.11, stdev=86.44 00:11:09.633 lat (usec): min=220, max=1174, avg=332.05, stdev=91.55 00:11:09.633 clat percentiles (usec): 00:11:09.633 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 245], 00:11:09.633 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 289], 60.00th=[ 310], 00:11:09.633 | 70.00th=[ 347], 80.00th=[ 379], 90.00th=[ 424], 95.00th=[ 461], 00:11:09.633 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 1090], 99.95th=[ 1156], 00:11:09.633 | 99.99th=[ 1156] 00:11:09.633 bw ( KiB/s): min= 5536, max= 5536, per=46.04%, avg=5536.00, stdev= 0.00, samples=1 00:11:09.633 iops : min= 1384, max= 1384, avg=1384.00, stdev= 0.00, samples=1 00:11:09.633 lat (usec) : 250=18.43%, 500=74.45%, 750=6.21%, 1000=0.06% 00:11:09.633 lat (msec) : 2=0.13%, 10=0.06%, 50=0.65% 00:11:09.633 cpu : usr=1.70%, sys=3.00%, ctx=1547, majf=0, minf=1 00:11:09.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.633 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.634 00:11:09.634 Run status group 0 (all jobs): 00:11:09.634 READ: bw=6826KiB/s (6990kB/s), 296KiB/s-3769KiB/s (303kB/s-3860kB/s), io=6976KiB (7143kB), run=1001-1022msec 00:11:09.634 WRITE: bw=11.7MiB/s (12.3MB/s), 2004KiB/s-4092KiB/s (2052kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1022msec 00:11:09.634 00:11:09.634 Disk stats (read/write): 00:11:09.634 nvme0n1: ios=229/512, merge=0/0, ticks=634/194, in_queue=828, util=87.37% 00:11:09.634 nvme0n2: ios=95/512, merge=0/0, ticks=723/135, in_queue=858, 
util=87.39% 00:11:09.634 nvme0n3: ios=958/1024, merge=0/0, ticks=418/311, in_queue=729, util=88.69% 00:11:09.634 nvme0n4: ios=575/1024, merge=0/0, ticks=1088/305, in_queue=1393, util=97.78% 00:11:09.634 06:03:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:09.634 [global] 00:11:09.634 thread=1 00:11:09.634 invalidate=1 00:11:09.634 rw=write 00:11:09.634 time_based=1 00:11:09.634 runtime=1 00:11:09.634 ioengine=libaio 00:11:09.634 direct=1 00:11:09.634 bs=4096 00:11:09.634 iodepth=128 00:11:09.634 norandommap=0 00:11:09.634 numjobs=1 00:11:09.634 00:11:09.634 verify_dump=1 00:11:09.634 verify_backlog=512 00:11:09.634 verify_state_save=0 00:11:09.634 do_verify=1 00:11:09.634 verify=crc32c-intel 00:11:09.634 [job0] 00:11:09.634 filename=/dev/nvme0n1 00:11:09.634 [job1] 00:11:09.634 filename=/dev/nvme0n2 00:11:09.634 [job2] 00:11:09.634 filename=/dev/nvme0n3 00:11:09.634 [job3] 00:11:09.634 filename=/dev/nvme0n4 00:11:09.634 Could not set queue depth (nvme0n1) 00:11:09.634 Could not set queue depth (nvme0n2) 00:11:09.634 Could not set queue depth (nvme0n3) 00:11:09.634 Could not set queue depth (nvme0n4) 00:11:09.892 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.892 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.892 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.892 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.892 fio-3.35 00:11:09.892 Starting 4 threads 00:11:11.268 00:11:11.268 job0: (groupid=0, jobs=1): err= 0: pid=72041: Fri Jul 26 06:03:22 2024 00:11:11.269 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:11:11.269 slat (usec): min=3, max=12732, 
avg=125.91, stdev=750.58 00:11:11.269 clat (usec): min=7651, max=37402, avg=16325.70, stdev=4809.75 00:11:11.269 lat (usec): min=7673, max=40349, avg=16451.60, stdev=4870.74 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[11600], 20.00th=[12911], 00:11:11.269 | 30.00th=[13304], 40.00th=[13829], 50.00th=[15270], 60.00th=[16319], 00:11:11.269 | 70.00th=[17171], 80.00th=[19006], 90.00th=[24249], 95.00th=[26346], 00:11:11.269 | 99.00th=[32900], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:11:11.269 | 99.99th=[37487] 00:11:11.269 write: IOPS=3703, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1007msec); 0 zone resets 00:11:11.269 slat (usec): min=4, max=12312, avg=136.31, stdev=838.97 00:11:11.269 clat (usec): min=6060, max=94491, avg=18489.77, stdev=14519.55 00:11:11.269 lat (usec): min=6084, max=94532, avg=18626.09, stdev=14624.26 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 8029], 5.00th=[11207], 10.00th=[11994], 20.00th=[12387], 00:11:11.269 | 30.00th=[12780], 40.00th=[13566], 50.00th=[14222], 60.00th=[16188], 00:11:11.269 | 70.00th=[16581], 80.00th=[17171], 90.00th=[23987], 95.00th=[41681], 00:11:11.269 | 99.00th=[91751], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:11:11.269 | 99.99th=[94897] 00:11:11.269 bw ( KiB/s): min=12288, max=16528, per=27.00%, avg=14408.00, stdev=2998.13, samples=2 00:11:11.269 iops : min= 3072, max= 4132, avg=3602.00, stdev=749.53, samples=2 00:11:11.269 lat (msec) : 10=2.17%, 20=82.61%, 50=12.84%, 100=2.38% 00:11:11.269 cpu : usr=5.57%, sys=7.46%, ctx=309, majf=0, minf=7 00:11:11.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:11.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.269 issued rwts: total=3584,3729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.269 latency : target=0, window=0, percentile=100.00%, depth=128 
00:11:11.269 job1: (groupid=0, jobs=1): err= 0: pid=72042: Fri Jul 26 06:03:22 2024 00:11:11.269 read: IOPS=3941, BW=15.4MiB/s (16.1MB/s)(15.6MiB/1011msec) 00:11:11.269 slat (usec): min=3, max=22980, avg=125.06, stdev=926.25 00:11:11.269 clat (usec): min=3142, max=48416, avg=16758.58, stdev=5596.41 00:11:11.269 lat (usec): min=8686, max=48454, avg=16883.64, stdev=5669.11 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 8717], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:11:11.269 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13960], 60.00th=[14615], 00:11:11.269 | 70.00th=[19530], 80.00th=[22414], 90.00th=[24773], 95.00th=[28967], 00:11:11.269 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34866], 99.95th=[42730], 00:11:11.269 | 99.99th=[48497] 00:11:11.269 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:11:11.269 slat (usec): min=4, max=17392, avg=114.15, stdev=750.36 00:11:11.269 clat (usec): min=7356, max=34395, avg=14782.83, stdev=3728.33 00:11:11.269 lat (usec): min=8063, max=34455, avg=14896.98, stdev=3779.35 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 8094], 5.00th=[11076], 10.00th=[12125], 20.00th=[12780], 00:11:11.269 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:11:11.269 | 70.00th=[14615], 80.00th=[16909], 90.00th=[20317], 95.00th=[23200], 00:11:11.269 | 99.00th=[27132], 99.50th=[27395], 99.90th=[28443], 99.95th=[30802], 00:11:11.269 | 99.99th=[34341] 00:11:11.269 bw ( KiB/s): min=14736, max=18032, per=30.71%, avg=16384.00, stdev=2330.62, samples=2 00:11:11.269 iops : min= 3684, max= 4508, avg=4096.00, stdev=582.66, samples=2 00:11:11.269 lat (msec) : 4=0.01%, 10=2.43%, 20=77.24%, 50=20.32% 00:11:11.269 cpu : usr=5.84%, sys=7.33%, ctx=305, majf=0, minf=13 00:11:11.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:11.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.269 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.269 issued rwts: total=3985,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.269 job2: (groupid=0, jobs=1): err= 0: pid=72043: Fri Jul 26 06:03:22 2024 00:11:11.269 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:11:11.269 slat (usec): min=2, max=28374, avg=159.76, stdev=1249.38 00:11:11.269 clat (usec): min=2081, max=93691, avg=20457.47, stdev=10506.51 00:11:11.269 lat (usec): min=2099, max=93700, avg=20617.24, stdev=10588.39 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[14877], 00:11:11.269 | 30.00th=[16712], 40.00th=[17433], 50.00th=[17957], 60.00th=[18744], 00:11:11.269 | 70.00th=[20579], 80.00th=[22676], 90.00th=[31589], 95.00th=[38011], 00:11:11.269 | 99.00th=[62653], 99.50th=[65274], 99.90th=[65274], 99.95th=[70779], 00:11:11.269 | 99.99th=[93848] 00:11:11.269 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1009msec); 0 zone resets 00:11:11.269 slat (usec): min=3, max=21804, avg=145.36, stdev=988.60 00:11:11.269 clat (usec): min=776, max=72182, avg=20824.72, stdev=10000.91 00:11:11.269 lat (usec): min=798, max=72190, avg=20970.08, stdev=10054.05 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 3687], 5.00th=[ 8586], 10.00th=[12256], 20.00th=[15270], 00:11:11.269 | 30.00th=[16057], 40.00th=[17171], 50.00th=[18482], 60.00th=[19530], 00:11:11.269 | 70.00th=[21890], 80.00th=[24249], 90.00th=[36963], 95.00th=[39060], 00:11:11.269 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:11:11.269 | 99.99th=[71828] 00:11:11.269 bw ( KiB/s): min=10304, max=14272, per=23.03%, avg=12288.00, stdev=2805.80, samples=2 00:11:11.269 iops : min= 2576, max= 3568, avg=3072.00, stdev=701.45, samples=2 00:11:11.269 lat (usec) : 1000=0.02% 00:11:11.269 lat (msec) : 4=0.68%, 10=7.94%, 20=53.86%, 50=34.25%, 100=3.26% 00:11:11.269 cpu : 
usr=2.58%, sys=4.56%, ctx=326, majf=0, minf=7 00:11:11.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:11.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.269 issued rwts: total=3072,3100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.269 job3: (groupid=0, jobs=1): err= 0: pid=72044: Fri Jul 26 06:03:22 2024 00:11:11.269 read: IOPS=2077, BW=8310KiB/s (8509kB/s)(8368KiB/1007msec) 00:11:11.269 slat (usec): min=2, max=30075, avg=251.13, stdev=1783.14 00:11:11.269 clat (usec): min=1644, max=73192, avg=32842.23, stdev=14707.24 00:11:11.269 lat (usec): min=7596, max=73196, avg=33093.35, stdev=14784.49 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 7701], 5.00th=[13435], 10.00th=[15795], 20.00th=[17957], 00:11:11.269 | 30.00th=[23462], 40.00th=[27395], 50.00th=[29492], 60.00th=[36963], 00:11:11.269 | 70.00th=[40633], 80.00th=[44303], 90.00th=[53740], 95.00th=[60556], 00:11:11.269 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:11:11.269 | 99.99th=[72877] 00:11:11.269 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:11.269 slat (usec): min=3, max=18016, avg=181.01, stdev=1127.50 00:11:11.269 clat (usec): min=7941, max=55159, avg=22794.08, stdev=8234.49 00:11:11.269 lat (usec): min=7952, max=55164, avg=22975.09, stdev=8275.39 00:11:11.269 clat percentiles (usec): 00:11:11.269 | 1.00th=[ 8717], 5.00th=[15664], 10.00th=[15926], 20.00th=[16712], 00:11:11.269 | 30.00th=[17433], 40.00th=[19268], 50.00th=[20579], 60.00th=[21627], 00:11:11.269 | 70.00th=[23462], 80.00th=[27132], 90.00th=[33817], 95.00th=[41681], 00:11:11.269 | 99.00th=[47449], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:11:11.269 | 99.99th=[55313] 00:11:11.269 bw ( KiB/s): min= 8192, max=11616, per=18.56%, 
avg=9904.00, stdev=2421.13, samples=2 00:11:11.269 iops : min= 2048, max= 2904, avg=2476.00, stdev=605.28, samples=2 00:11:11.269 lat (msec) : 2=0.02%, 10=2.13%, 20=31.17%, 50=60.58%, 100=6.10% 00:11:11.269 cpu : usr=1.49%, sys=3.68%, ctx=197, majf=0, minf=23 00:11:11.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:11.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.269 issued rwts: total=2092,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.269 00:11:11.269 Run status group 0 (all jobs): 00:11:11.269 READ: bw=49.2MiB/s (51.6MB/s), 8310KiB/s-15.4MiB/s (8509kB/s-16.1MB/s), io=49.7MiB (52.2MB), run=1007-1011msec 00:11:11.269 WRITE: bw=52.1MiB/s (54.6MB/s), 9.93MiB/s-15.8MiB/s (10.4MB/s-16.6MB/s), io=52.7MiB (55.2MB), run=1007-1011msec 00:11:11.269 00:11:11.269 Disk stats (read/write): 00:11:11.269 nvme0n1: ios=2784/3072, merge=0/0, ticks=24396/27739, in_queue=52135, util=98.10% 00:11:11.269 nvme0n2: ios=3386/3584, merge=0/0, ticks=29705/20169, in_queue=49874, util=99.59% 00:11:11.269 nvme0n3: ios=2618/2979, merge=0/0, ticks=31073/30550, in_queue=61623, util=97.29% 00:11:11.269 nvme0n4: ios=1642/2048, merge=0/0, ticks=21653/19777, in_queue=41430, util=98.00% 00:11:11.270 06:03:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:11.270 [global] 00:11:11.270 thread=1 00:11:11.270 invalidate=1 00:11:11.270 rw=randwrite 00:11:11.270 time_based=1 00:11:11.270 runtime=1 00:11:11.270 ioengine=libaio 00:11:11.270 direct=1 00:11:11.270 bs=4096 00:11:11.270 iodepth=128 00:11:11.270 norandommap=0 00:11:11.270 numjobs=1 00:11:11.270 00:11:11.270 verify_dump=1 00:11:11.270 verify_backlog=512 00:11:11.270 
verify_state_save=0 00:11:11.270 do_verify=1 00:11:11.270 verify=crc32c-intel 00:11:11.270 [job0] 00:11:11.270 filename=/dev/nvme0n1 00:11:11.270 [job1] 00:11:11.270 filename=/dev/nvme0n2 00:11:11.270 [job2] 00:11:11.270 filename=/dev/nvme0n3 00:11:11.270 [job3] 00:11:11.270 filename=/dev/nvme0n4 00:11:11.270 Could not set queue depth (nvme0n1) 00:11:11.270 Could not set queue depth (nvme0n2) 00:11:11.270 Could not set queue depth (nvme0n3) 00:11:11.270 Could not set queue depth (nvme0n4) 00:11:11.270 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.270 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.270 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.270 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.270 fio-3.35 00:11:11.270 Starting 4 threads 00:11:12.655 00:11:12.655 job0: (groupid=0, jobs=1): err= 0: pid=72299: Fri Jul 26 06:03:23 2024 00:11:12.655 read: IOPS=3354, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1007msec) 00:11:12.655 slat (usec): min=2, max=14182, avg=141.90, stdev=1032.17 00:11:12.655 clat (usec): min=2755, max=53866, avg=17178.06, stdev=6015.42 00:11:12.655 lat (usec): min=6026, max=55612, avg=17319.97, stdev=6108.86 00:11:12.655 clat percentiles (usec): 00:11:12.655 | 1.00th=[ 8455], 5.00th=[11731], 10.00th=[13173], 20.00th=[13960], 00:11:12.655 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15401], 60.00th=[16188], 00:11:12.655 | 70.00th=[17695], 80.00th=[19530], 90.00th=[22938], 95.00th=[28443], 00:11:12.655 | 99.00th=[44303], 99.50th=[49546], 99.90th=[53740], 99.95th=[53740], 00:11:12.655 | 99.99th=[53740] 00:11:12.656 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:11:12.656 slat (usec): min=3, max=12999, avg=122.74, stdev=721.86 00:11:12.656 
clat (usec): min=1070, max=65324, avg=19349.84, stdev=10586.36 00:11:12.656 lat (usec): min=1181, max=65332, avg=19472.58, stdev=10658.67 00:11:12.656 clat percentiles (usec): 00:11:12.656 | 1.00th=[ 6325], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[12256], 00:11:12.656 | 30.00th=[13042], 40.00th=[14746], 50.00th=[15795], 60.00th=[17957], 00:11:12.656 | 70.00th=[21103], 80.00th=[26608], 90.00th=[30016], 95.00th=[41157], 00:11:12.656 | 99.00th=[60556], 99.50th=[62653], 99.90th=[65274], 99.95th=[65274], 00:11:12.656 | 99.99th=[65274] 00:11:12.656 bw ( KiB/s): min=12288, max=16384, per=27.25%, avg=14336.00, stdev=2896.31, samples=2 00:11:12.656 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:12.656 lat (msec) : 2=0.09%, 4=0.19%, 10=8.10%, 20=65.89%, 50=24.10% 00:11:12.656 lat (msec) : 100=1.64% 00:11:12.656 cpu : usr=3.88%, sys=5.86%, ctx=311, majf=0, minf=17 00:11:12.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.656 issued rwts: total=3378,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.656 job1: (groupid=0, jobs=1): err= 0: pid=72321: Fri Jul 26 06:03:23 2024 00:11:12.656 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:11:12.656 slat (usec): min=2, max=13208, avg=136.12, stdev=824.36 00:11:12.656 clat (usec): min=4057, max=36948, avg=17660.58, stdev=5955.35 00:11:12.656 lat (usec): min=4063, max=36959, avg=17796.70, stdev=5992.32 00:11:12.656 clat percentiles (usec): 00:11:12.656 | 1.00th=[ 5669], 5.00th=[ 9110], 10.00th=[12649], 20.00th=[13173], 00:11:12.656 | 30.00th=[13829], 40.00th=[15533], 50.00th=[16909], 60.00th=[17957], 00:11:12.656 | 70.00th=[19530], 80.00th=[21103], 90.00th=[26084], 95.00th=[29492], 00:11:12.656 | 99.00th=[33817], 
99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:11:12.656 | 99.99th=[36963] 00:11:12.656 write: IOPS=3221, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1004msec); 0 zone resets 00:11:12.656 slat (usec): min=3, max=17875, avg=171.87, stdev=1161.24 00:11:12.656 clat (usec): min=2809, max=85658, avg=22298.45, stdev=17349.91 00:11:12.656 lat (usec): min=3597, max=85663, avg=22470.32, stdev=17439.94 00:11:12.656 clat percentiles (usec): 00:11:12.656 | 1.00th=[ 5407], 5.00th=[ 8848], 10.00th=[10814], 20.00th=[12780], 00:11:12.656 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14091], 60.00th=[15795], 00:11:12.656 | 70.00th=[19530], 80.00th=[27919], 90.00th=[55313], 95.00th=[66847], 00:11:12.656 | 99.00th=[73925], 99.50th=[81265], 99.90th=[85459], 99.95th=[85459], 00:11:12.656 | 99.99th=[85459] 00:11:12.656 bw ( KiB/s): min= 8920, max=15967, per=23.65%, avg=12443.50, stdev=4982.98, samples=2 00:11:12.656 iops : min= 2230, max= 3991, avg=3110.50, stdev=1245.22, samples=2 00:11:12.656 lat (msec) : 4=0.27%, 10=7.37%, 20=65.11%, 50=21.19%, 100=6.06% 00:11:12.656 cpu : usr=2.09%, sys=4.29%, ctx=267, majf=0, minf=7 00:11:12.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.656 issued rwts: total=3072,3234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.656 job2: (groupid=0, jobs=1): err= 0: pid=72356: Fri Jul 26 06:03:23 2024 00:11:12.656 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:11:12.656 slat (usec): min=2, max=10829, avg=133.13, stdev=736.98 00:11:12.656 clat (usec): min=11302, max=67530, avg=19241.08, stdev=7743.95 00:11:12.656 lat (usec): min=11320, max=67543, avg=19374.21, stdev=7748.92 00:11:12.656 clat percentiles (usec): 00:11:12.656 | 1.00th=[12256], 5.00th=[14222], 10.00th=[15008], 
20.00th=[15533], 00:11:12.656 | 30.00th=[15795], 40.00th=[16581], 50.00th=[17171], 60.00th=[17433], 00:11:12.656 | 70.00th=[18744], 80.00th=[20579], 90.00th=[26346], 95.00th=[31327], 00:11:12.656 | 99.00th=[57410], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:11:12.656 | 99.99th=[67634] 00:11:12.656 write: IOPS=3338, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1005msec); 0 zone resets 00:11:12.656 slat (usec): min=4, max=18548, avg=167.01, stdev=1052.55 00:11:12.656 clat (usec): min=462, max=66771, avg=20221.72, stdev=9400.50 00:11:12.656 lat (usec): min=5213, max=67123, avg=20388.72, stdev=9500.82 00:11:12.656 clat percentiles (usec): 00:11:12.656 | 1.00th=[ 5604], 5.00th=[11731], 10.00th=[13829], 20.00th=[15401], 00:11:12.656 | 30.00th=[15926], 40.00th=[16909], 50.00th=[17695], 60.00th=[18482], 00:11:12.656 | 70.00th=[19792], 80.00th=[22938], 90.00th=[27919], 95.00th=[41157], 00:11:12.656 | 99.00th=[62129], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:11:12.656 | 99.99th=[66847] 00:11:12.656 bw ( KiB/s): min=10504, max=15312, per=24.53%, avg=12908.00, stdev=3399.77, samples=2 00:11:12.656 iops : min= 2626, max= 3828, avg=3227.00, stdev=849.94, samples=2 00:11:12.656 lat (usec) : 500=0.02% 00:11:12.656 lat (msec) : 10=0.65%, 20=74.54%, 50=22.37%, 100=2.41% 00:11:12.656 cpu : usr=3.88%, sys=7.07%, ctx=272, majf=0, minf=13 00:11:12.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.656 issued rwts: total=3072,3355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.656 job3: (groupid=0, jobs=1): err= 0: pid=72369: Fri Jul 26 06:03:23 2024 00:11:12.656 read: IOPS=2645, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1005msec) 00:11:12.656 slat (usec): min=3, max=16849, avg=159.07, stdev=830.44 00:11:12.656 clat 
(usec): min=3841, max=68109, avg=21646.97, stdev=10868.07 00:11:12.656 lat (usec): min=7652, max=68125, avg=21806.05, stdev=10905.79 00:11:12.656 clat percentiles (usec): 00:11:12.656 | 1.00th=[ 7963], 5.00th=[13173], 10.00th=[13960], 20.00th=[14877], 00:11:12.656 | 30.00th=[15270], 40.00th=[15795], 50.00th=[16450], 60.00th=[17957], 00:11:12.656 | 70.00th=[25297], 80.00th=[27919], 90.00th=[29492], 95.00th=[46924], 00:11:12.656 | 99.00th=[61604], 99.50th=[63701], 99.90th=[67634], 99.95th=[67634], 00:11:12.656 | 99.99th=[67634] 00:11:12.656 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:11:12.656 slat (usec): min=4, max=24080, avg=175.32, stdev=1179.83 00:11:12.656 clat (usec): min=11406, max=75545, avg=21772.94, stdev=13572.95 00:11:12.656 lat (usec): min=11447, max=75564, avg=21948.25, stdev=13659.49 00:11:12.656 clat percentiles (usec): 00:11:12.656 | 1.00th=[11863], 5.00th=[12649], 10.00th=[13173], 20.00th=[13960], 00:11:12.656 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15926], 60.00th=[16909], 00:11:12.656 | 70.00th=[20841], 80.00th=[22676], 90.00th=[43779], 95.00th=[54264], 00:11:12.656 | 99.00th=[74974], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:11:12.656 | 99.99th=[76022] 00:11:12.656 bw ( KiB/s): min= 8192, max=16160, per=23.14%, avg=12176.00, stdev=5634.23, samples=2 00:11:12.656 iops : min= 2048, max= 4040, avg=3044.00, stdev=1408.56, samples=2 00:11:12.656 lat (msec) : 4=0.02%, 10=0.56%, 20=63.64%, 50=30.03%, 100=5.76% 00:11:12.656 cpu : usr=3.88%, sys=7.97%, ctx=260, majf=0, minf=15 00:11:12.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.656 issued rwts: total=2659,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.656 00:11:12.656 
Run status group 0 (all jobs): 00:11:12.656 READ: bw=47.3MiB/s (49.5MB/s), 10.3MiB/s-13.1MiB/s (10.8MB/s-13.7MB/s), io=47.6MiB (49.9MB), run=1004-1007msec 00:11:12.656 WRITE: bw=51.4MiB/s (53.9MB/s), 11.9MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=51.7MiB (54.3MB), run=1004-1007msec 00:11:12.656 00:11:12.656 Disk stats (read/write): 00:11:12.656 nvme0n1: ios=2599/2887, merge=0/0, ticks=42928/55522, in_queue=98450, util=100.00% 00:11:12.656 nvme0n2: ios=2472/2560, merge=0/0, ticks=14559/22425, in_queue=36984, util=86.47% 00:11:12.656 nvme0n3: ios=2697/3072, merge=0/0, ticks=15060/18793, in_queue=33853, util=88.78% 00:11:12.656 nvme0n4: ios=2105/2387, merge=0/0, ticks=12704/15551, in_queue=28255, util=97.35% 00:11:12.656 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:12.656 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=72531 00:11:12.656 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:12.656 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:12.656 [global] 00:11:12.656 thread=1 00:11:12.656 invalidate=1 00:11:12.656 rw=read 00:11:12.656 time_based=1 00:11:12.656 runtime=10 00:11:12.656 ioengine=libaio 00:11:12.656 direct=1 00:11:12.656 bs=4096 00:11:12.656 iodepth=1 00:11:12.656 norandommap=1 00:11:12.656 numjobs=1 00:11:12.656 00:11:12.656 [job0] 00:11:12.656 filename=/dev/nvme0n1 00:11:12.656 [job1] 00:11:12.656 filename=/dev/nvme0n2 00:11:12.656 [job2] 00:11:12.656 filename=/dev/nvme0n3 00:11:12.656 [job3] 00:11:12.656 filename=/dev/nvme0n4 00:11:12.656 Could not set queue depth (nvme0n1) 00:11:12.656 Could not set queue depth (nvme0n2) 00:11:12.656 Could not set queue depth (nvme0n3) 00:11:12.656 Could not set queue depth (nvme0n4) 00:11:12.656 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.656 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.656 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.656 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.656 fio-3.35 00:11:12.656 Starting 4 threads 00:11:15.941 06:03:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:15.941 06:03:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:15.941 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=22536192, buflen=4096 00:11:15.941 fio: pid=72631, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:15.941 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.941 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:15.941 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=421888, buflen=4096 00:11:15.941 fio: pid=72630, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:16.200 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=32272384, buflen=4096 00:11:16.200 fio: pid=72628, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:16.461 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.461 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:16.461 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=28889088, buflen=4096 00:11:16.461 fio: pid=72629, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:16.720 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.720 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:16.720 00:11:16.720 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72628: Fri Jul 26 06:03:27 2024 00:11:16.720 read: IOPS=2280, BW=9119KiB/s (9338kB/s)(30.8MiB/3456msec) 00:11:16.720 slat (usec): min=5, max=18534, avg=18.67, stdev=311.16 00:11:16.720 clat (usec): min=302, max=2304, avg=413.83, stdev=76.59 00:11:16.720 lat (usec): min=308, max=18990, avg=432.50, stdev=323.18 00:11:16.720 clat percentiles (usec): 00:11:16.720 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 355], 00:11:16.720 | 30.00th=[ 371], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 412], 00:11:16.720 | 70.00th=[ 433], 80.00th=[ 461], 90.00th=[ 498], 95.00th=[ 529], 00:11:16.720 | 99.00th=[ 644], 99.50th=[ 693], 99.90th=[ 1012], 99.95th=[ 1582], 00:11:16.720 | 99.99th=[ 2311] 00:11:16.720 bw ( KiB/s): min= 8520, max= 9560, per=42.33%, avg=9244.00, stdev=406.44, samples=6 00:11:16.720 iops : min= 2130, max= 2390, avg=2311.00, stdev=101.61, samples=6 00:11:16.720 lat (usec) : 500=90.76%, 750=9.01%, 1000=0.11% 00:11:16.720 lat (msec) : 2=0.09%, 4=0.01% 00:11:16.720 cpu : usr=1.36%, sys=4.63%, ctx=7884, majf=0, minf=1 00:11:16.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 complete : 
0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 issued rwts: total=7880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.720 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72629: Fri Jul 26 06:03:27 2024 00:11:16.720 read: IOPS=1875, BW=7499KiB/s (7679kB/s)(27.6MiB/3762msec) 00:11:16.720 slat (usec): min=4, max=33043, avg=28.01, stdev=506.17 00:11:16.720 clat (usec): min=304, max=22959, avg=499.85, stdev=299.16 00:11:16.720 lat (usec): min=316, max=33729, avg=527.87, stdev=617.19 00:11:16.720 clat percentiles (usec): 00:11:16.720 | 1.00th=[ 334], 5.00th=[ 359], 10.00th=[ 392], 20.00th=[ 437], 00:11:16.720 | 30.00th=[ 474], 40.00th=[ 490], 50.00th=[ 502], 60.00th=[ 510], 00:11:16.720 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 611], 00:11:16.720 | 99.00th=[ 750], 99.50th=[ 857], 99.90th=[ 898], 99.95th=[ 1074], 00:11:16.720 | 99.99th=[22938] 00:11:16.720 bw ( KiB/s): min= 6872, max= 7856, per=34.39%, avg=7510.14, stdev=420.91, samples=7 00:11:16.720 iops : min= 1718, max= 1964, avg=1877.43, stdev=105.40, samples=7 00:11:16.720 lat (usec) : 500=49.82%, 750=49.15%, 1000=0.96% 00:11:16.720 lat (msec) : 2=0.01%, 10=0.03%, 50=0.01% 00:11:16.720 cpu : usr=1.83%, sys=4.25%, ctx=7061, majf=0, minf=1 00:11:16.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 issued rwts: total=7054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.720 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72630: Fri Jul 26 06:03:27 2024 00:11:16.720 read: IOPS=32, BW=129KiB/s (132kB/s)(412KiB/3206msec) 
00:11:16.720 slat (nsec): min=6667, max=36533, avg=17132.00, stdev=7364.08 00:11:16.720 clat (usec): min=382, max=42010, avg=30887.56, stdev=17772.36 00:11:16.720 lat (usec): min=389, max=42025, avg=30904.70, stdev=17775.35 00:11:16.720 clat percentiles (usec): 00:11:16.720 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 420], 20.00th=[ 529], 00:11:16.720 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:16.720 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:16.720 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.720 | 99.99th=[42206] 00:11:16.720 bw ( KiB/s): min= 96, max= 280, per=0.60%, avg=130.67, stdev=73.26, samples=6 00:11:16.720 iops : min= 24, max= 70, avg=32.67, stdev=18.32, samples=6 00:11:16.720 lat (usec) : 500=19.23%, 750=5.77% 00:11:16.720 lat (msec) : 50=74.04% 00:11:16.720 cpu : usr=0.00%, sys=0.09%, ctx=105, majf=0, minf=1 00:11:16.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 issued rwts: total=104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.720 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72631: Fri Jul 26 06:03:27 2024 00:11:16.720 read: IOPS=1882, BW=7529KiB/s (7710kB/s)(21.5MiB/2923msec) 00:11:16.720 slat (nsec): min=5312, max=74290, avg=20531.38, stdev=10854.91 00:11:16.720 clat (usec): min=330, max=41664, avg=500.69, stdev=952.36 00:11:16.720 lat (usec): min=335, max=41686, avg=521.23, stdev=952.69 00:11:16.720 clat percentiles (usec): 00:11:16.720 | 1.00th=[ 355], 5.00th=[ 379], 10.00th=[ 392], 20.00th=[ 412], 00:11:16.720 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 474], 60.00th=[ 490], 00:11:16.720 | 70.00th=[ 510], 
80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 611], 00:11:16.720 | 99.00th=[ 750], 99.50th=[ 799], 99.90th=[ 914], 99.95th=[40633], 00:11:16.720 | 99.99th=[41681] 00:11:16.720 bw ( KiB/s): min= 6880, max= 8680, per=35.73%, avg=7803.20, stdev=698.26, samples=5 00:11:16.720 iops : min= 1720, max= 2170, avg=1950.80, stdev=174.57, samples=5 00:11:16.720 lat (usec) : 500=65.95%, 750=33.05%, 1000=0.91% 00:11:16.720 lat (msec) : 2=0.02%, 50=0.05% 00:11:16.720 cpu : usr=1.95%, sys=4.18%, ctx=5503, majf=0, minf=1 00:11:16.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.720 issued rwts: total=5503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.720 00:11:16.720 Run status group 0 (all jobs): 00:11:16.720 READ: bw=21.3MiB/s (22.4MB/s), 129KiB/s-9119KiB/s (132kB/s-9338kB/s), io=80.2MiB (84.1MB), run=2923-3762msec 00:11:16.720 00:11:16.720 Disk stats (read/write): 00:11:16.720 nvme0n1: ios=7651/0, merge=0/0, ticks=3056/0, in_queue=3056, util=94.51% 00:11:16.720 nvme0n2: ios=6743/0, merge=0/0, ticks=3282/0, in_queue=3282, util=94.59% 00:11:16.720 nvme0n3: ios=101/0, merge=0/0, ticks=3101/0, in_queue=3101, util=96.79% 00:11:16.720 nvme0n4: ios=5427/0, merge=0/0, ticks=2526/0, in_queue=2526, util=96.75% 00:11:16.978 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.978 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:17.237 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.237 
06:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:17.495 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.495 06:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:17.753 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.753 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:18.351 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:18.351 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 72531 00:11:18.351 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:18.351 06:03:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.917 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.917 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:18.917 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:18.917 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.175 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
00:11:19.175 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.175 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:19.175 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:19.175 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:19.175 nvmf hotplug test: fio failed as expected 00:11:19.175 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.435 rmmod nvme_tcp 
00:11:19.435 rmmod nvme_fabrics 00:11:19.435 rmmod nvme_keyring 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 70367 ']' 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 70367 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 70367 ']' 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 70367 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70367 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70367' 00:11:19.435 killing process with pid 70367 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 70367 00:11:19.435 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 70367 00:11:20.807 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for 
function 280 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.807 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.754 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:22.754 00:11:22.754 real 0m26.403s 00:11:22.754 user 1m31.122s 00:11:22.754 sys 0m7.339s 00:11:22.754 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.754 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.754 ************************************ 00:11:22.754 END TEST nvmf_fio_target 00:11:22.754 ************************************ 00:11:22.754 06:03:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:22.754 06:03:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.754 06:03:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.754 06:03:33 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.754 ************************************ 00:11:22.754 START TEST nvmf_bdevio 00:11:22.754 ************************************ 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:22.754 * Looking for test storage... 00:11:22.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:22.754 06:03:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:22.754 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.290 06:03:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.290 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.291 06:03:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:25.291 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:25.291 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:25.291 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:25.291 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:11:25.291 00:11:25.291 --- 10.0.0.2 ping statistics --- 00:11:25.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.291 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:11:25.291 00:11:25.291 --- 10.0.0.1 ping statistics --- 00:11:25.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.291 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.291 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=75517 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 75517 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 75517 ']' 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.292 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.292 [2024-07-26 06:03:36.426832] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:25.292 [2024-07-26 06:03:36.427000] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.292 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.292 [2024-07-26 06:03:36.569451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.551 [2024-07-26 06:03:36.829430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.551 [2024-07-26 06:03:36.829506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.551 [2024-07-26 06:03:36.829534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.551 [2024-07-26 06:03:36.829557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.551 [2024-07-26 06:03:36.829579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:25.551 [2024-07-26 06:03:36.829729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:25.552 [2024-07-26 06:03:36.830071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:25.552 [2024-07-26 06:03:36.830109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.552 [2024-07-26 06:03:36.830120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.119 [2024-07-26 06:03:37.434095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.119 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.119 06:03:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.379 Malloc0 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.379 [2024-07-26 06:03:37.538988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:26.379 { 00:11:26.379 "params": { 00:11:26.379 "name": "Nvme$subsystem", 00:11:26.379 "trtype": "$TEST_TRANSPORT", 00:11:26.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.379 "adrfam": "ipv4", 00:11:26.379 "trsvcid": "$NVMF_PORT", 00:11:26.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.379 "hdgst": ${hdgst:-false}, 00:11:26.379 "ddgst": ${ddgst:-false} 00:11:26.379 }, 00:11:26.379 "method": "bdev_nvme_attach_controller" 00:11:26.379 } 00:11:26.379 EOF 00:11:26.379 )") 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:26.379 06:03:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:26.379 "params": { 00:11:26.379 "name": "Nvme1", 00:11:26.379 "trtype": "tcp", 00:11:26.379 "traddr": "10.0.0.2", 00:11:26.379 "adrfam": "ipv4", 00:11:26.380 "trsvcid": "4420", 00:11:26.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:26.380 "hdgst": false, 00:11:26.380 "ddgst": false 00:11:26.380 }, 00:11:26.380 "method": "bdev_nvme_attach_controller" 00:11:26.380 }' 00:11:26.380 [2024-07-26 06:03:37.623457] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:26.380 [2024-07-26 06:03:37.623597] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75677 ] 00:11:26.380 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.638 [2024-07-26 06:03:37.749859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:26.896 [2024-07-26 06:03:37.992988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.896 [2024-07-26 06:03:37.993029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.896 [2024-07-26 06:03:37.993038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.466 I/O targets: 00:11:27.466 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:27.466 00:11:27.466 00:11:27.466 CUnit - A unit testing framework for C - Version 2.1-3 00:11:27.466 http://cunit.sourceforge.net/ 00:11:27.466 00:11:27.466 00:11:27.466 Suite: bdevio tests on: Nvme1n1 00:11:27.466 Test: blockdev write read block ...passed 00:11:27.466 Test: blockdev write zeroes read block ...passed 00:11:27.467 Test: blockdev write zeroes read no split 
...passed 00:11:27.467 Test: blockdev write zeroes read split ...passed 00:11:27.467 Test: blockdev write zeroes read split partial ...passed 00:11:27.467 Test: blockdev reset ...[2024-07-26 06:03:38.697875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:27.467 [2024-07-26 06:03:38.698073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:27.467 [2024-07-26 06:03:38.713135] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:27.467 passed 00:11:27.467 Test: blockdev write read 8 blocks ...passed 00:11:27.467 Test: blockdev write read size > 128k ...passed 00:11:27.467 Test: blockdev write read invalid size ...passed 00:11:27.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:27.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:27.467 Test: blockdev write read max offset ...passed 00:11:27.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:27.727 Test: blockdev writev readv 8 blocks ...passed 00:11:27.727 Test: blockdev writev readv 30 x 1block ...passed 00:11:27.727 Test: blockdev writev readv block ...passed 00:11:27.727 Test: blockdev writev readv size > 128k ...passed 00:11:27.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:27.727 Test: blockdev comparev and writev ...[2024-07-26 06:03:38.930694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.727 [2024-07-26 06:03:38.930773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:38.930820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:27.727 [2024-07-26 06:03:38.930850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:38.931376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.727 [2024-07-26 06:03:38.931412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:38.931447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.727 [2024-07-26 06:03:38.931480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:38.931976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.727 [2024-07-26 06:03:38.932010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:38.932044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.727 [2024-07-26 06:03:38.932090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:38.932604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.727 [2024-07-26 06:03:38.932639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:38.932673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.727 [2024-07-26 06:03:38.932699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:27.727 passed 00:11:27.727 Test: blockdev nvme passthru rw ...passed 00:11:27.727 Test: blockdev nvme passthru vendor specific ...[2024-07-26 06:03:39.016597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.727 [2024-07-26 06:03:39.016661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:39.016960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.727 [2024-07-26 06:03:39.016993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:39.017222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.727 [2024-07-26 06:03:39.017256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:27.727 [2024-07-26 06:03:39.017484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.727 [2024-07-26 06:03:39.017517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:27.727 passed 00:11:27.727 Test: blockdev nvme admin passthru ...passed 00:11:27.985 Test: blockdev copy ...passed 00:11:27.985 00:11:27.985 Run Summary: Type Total Ran Passed Failed Inactive 00:11:27.985 suites 1 1 n/a 0 0 00:11:27.985 tests 23 23 23 0 0 00:11:27.985 asserts 152 152 152 0 n/a 00:11:27.985 00:11:27.985 Elapsed time = 
1.133 seconds 00:11:28.923 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.923 rmmod nvme_tcp 00:11:28.923 rmmod nvme_fabrics 00:11:28.923 rmmod nvme_keyring 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 75517 ']' 00:11:28.923 06:03:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 75517 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 75517 ']' 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 75517 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75517 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75517' 00:11:28.923 killing process with pid 75517 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 75517 00:11:28.923 06:03:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 75517 00:11:30.301 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.301 06:03:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.841 00:11:32.841 real 0m9.588s 00:11:32.841 user 0m22.923s 00:11:32.841 sys 0m2.486s 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 ************************************ 00:11:32.841 END TEST nvmf_bdevio 00:11:32.841 ************************************ 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:32.841 00:11:32.841 real 4m25.791s 00:11:32.841 user 11m30.033s 00:11:32.841 sys 1m9.930s 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 ************************************ 00:11:32.841 END TEST nvmf_target_core 00:11:32.841 ************************************ 00:11:32.841 06:03:43 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:32.841 06:03:43 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.841 06:03:43 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.841 06:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 ************************************ 00:11:32.841 START TEST 
nvmf_target_extra 00:11:32.841 ************************************ 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:32.841 * Looking for test storage... 00:11:32.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.841 06:03:43 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 ************************************ 00:11:32.841 START TEST nvmf_example 00:11:32.841 ************************************ 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.841 * Looking for test storage... 
00:11:32.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.841 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:32.842 06:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:32.842 06:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.842 06:03:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.749 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.750 06:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:11:34.750 00:11:34.750 --- 10.0.0.2 ping statistics --- 00:11:34.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.750 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:34.750 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:11:34.750 00:11:34.750 --- 10.0.0.1 ping statistics --- 00:11:34.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.750 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=78062 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 78062 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 78062 ']' 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.751 06:03:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.751 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:35.736 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:35.736 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.945 Initializing NVMe Controllers 00:11:47.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.945 Initialization complete. Launching workers. 00:11:47.945 ======================================================== 00:11:47.945 Latency(us) 00:11:47.945 Device Information : IOPS MiB/s Average min max 00:11:47.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11466.55 44.79 5580.97 1281.85 20108.48 00:11:47.945 ======================================================== 00:11:47.945 Total : 11466.55 44.79 5580.97 1281.85 20108.48 00:11:47.945 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.945 rmmod nvme_tcp 00:11:47.945 rmmod nvme_fabrics 00:11:47.945 rmmod nvme_keyring 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 78062 ']' 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 78062 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 78062 ']' 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 78062 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78062 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:47.945 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:47.946 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78062' 00:11:47.946 killing process with pid 78062 00:11:47.946 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 78062 00:11:47.946 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 78062 00:11:47.946 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:11:47.946 nvmf threads initialize successfully 00:11:47.946 bdev subsystem init successfully 00:11:47.946 created a nvmf target service 00:11:47.946 create targets's poll groups done 00:11:47.946 all subsystems of target started 00:11:47.946 nvmf target is running 
00:11:47.946 all subsystems of target stopped 00:11:47.946 destroy targets's poll groups done 00:11:47.946 destroyed the nvmf target service 00:11:47.946 bdev subsystem finish successfully 00:11:47.946 nvmf threads destroy successfully 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.946 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.325 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:49.325 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.326 00:11:49.326 real 0m16.838s 00:11:49.326 user 0m47.544s 00:11:49.326 sys 0m3.134s 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.326 
************************************ 00:11:49.326 END TEST nvmf_example 00:11:49.326 ************************************ 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.326 ************************************ 00:11:49.326 START TEST nvmf_filesystem 00:11:49.326 ************************************ 00:11:49.326 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:49.588 * Looking for test storage... 
00:11:49.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:49.588 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # 
CONFIG_USDT=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:49.589 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:49.589 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:49.589 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:49.589 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:49.589 #define SPDK_CONFIG_H 00:11:49.589 #define SPDK_CONFIG_APPS 1 00:11:49.589 #define SPDK_CONFIG_ARCH 
native 00:11:49.589 #define SPDK_CONFIG_ASAN 1 00:11:49.590 #undef SPDK_CONFIG_AVAHI 00:11:49.590 #undef SPDK_CONFIG_CET 00:11:49.590 #define SPDK_CONFIG_COVERAGE 1 00:11:49.590 #define SPDK_CONFIG_CROSS_PREFIX 00:11:49.590 #undef SPDK_CONFIG_CRYPTO 00:11:49.590 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:49.590 #undef SPDK_CONFIG_CUSTOMOCF 00:11:49.590 #undef SPDK_CONFIG_DAOS 00:11:49.590 #define SPDK_CONFIG_DAOS_DIR 00:11:49.590 #define SPDK_CONFIG_DEBUG 1 00:11:49.590 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:49.590 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:49.590 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:49.590 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:49.590 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:49.590 #undef SPDK_CONFIG_DPDK_UADK 00:11:49.590 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.590 #define SPDK_CONFIG_EXAMPLES 1 00:11:49.590 #undef SPDK_CONFIG_FC 00:11:49.590 #define SPDK_CONFIG_FC_PATH 00:11:49.590 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:49.590 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:49.590 #undef SPDK_CONFIG_FUSE 00:11:49.590 #undef SPDK_CONFIG_FUZZER 00:11:49.590 #define SPDK_CONFIG_FUZZER_LIB 00:11:49.590 #undef SPDK_CONFIG_GOLANG 00:11:49.590 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:49.590 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:49.590 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:49.590 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:49.590 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:49.590 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:49.590 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:49.590 #define SPDK_CONFIG_IDXD 1 00:11:49.590 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:49.590 #undef SPDK_CONFIG_IPSEC_MB 00:11:49.590 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:49.590 #define SPDK_CONFIG_ISAL 1 00:11:49.590 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:49.590 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:49.590 #define SPDK_CONFIG_LIBDIR 00:11:49.590 #undef 
SPDK_CONFIG_LTO 00:11:49.590 #define SPDK_CONFIG_MAX_LCORES 128 00:11:49.590 #define SPDK_CONFIG_NVME_CUSE 1 00:11:49.590 #undef SPDK_CONFIG_OCF 00:11:49.590 #define SPDK_CONFIG_OCF_PATH 00:11:49.590 #define SPDK_CONFIG_OPENSSL_PATH 00:11:49.590 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:49.590 #define SPDK_CONFIG_PGO_DIR 00:11:49.590 #undef SPDK_CONFIG_PGO_USE 00:11:49.590 #define SPDK_CONFIG_PREFIX /usr/local 00:11:49.590 #undef SPDK_CONFIG_RAID5F 00:11:49.590 #undef SPDK_CONFIG_RBD 00:11:49.590 #define SPDK_CONFIG_RDMA 1 00:11:49.590 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:49.590 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:49.590 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:49.590 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:49.590 #define SPDK_CONFIG_SHARED 1 00:11:49.590 #undef SPDK_CONFIG_SMA 00:11:49.590 #define SPDK_CONFIG_TESTS 1 00:11:49.590 #undef SPDK_CONFIG_TSAN 00:11:49.590 #define SPDK_CONFIG_UBLK 1 00:11:49.590 #define SPDK_CONFIG_UBSAN 1 00:11:49.590 #undef SPDK_CONFIG_UNIT_TESTS 00:11:49.590 #undef SPDK_CONFIG_URING 00:11:49.590 #define SPDK_CONFIG_URING_PATH 00:11:49.590 #undef SPDK_CONFIG_URING_ZNS 00:11:49.590 #undef SPDK_CONFIG_USDT 00:11:49.590 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:49.590 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:49.590 #undef SPDK_CONFIG_VFIO_USER 00:11:49.590 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:49.590 #define SPDK_CONFIG_VHOST 1 00:11:49.590 #define SPDK_CONFIG_VIRTIO 1 00:11:49.590 #undef SPDK_CONFIG_VTUNE 00:11:49.590 #define SPDK_CONFIG_VTUNE_DIR 00:11:49.590 #define SPDK_CONFIG_WERROR 1 00:11:49.590 #define SPDK_CONFIG_WPDK_DIR 00:11:49.590 #undef SPDK_CONFIG_XNVME 00:11:49.590 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 
-- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:49.590 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:49.591 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@68 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:49.591 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:49.591 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:49.591 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:49.591 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:49.592 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:49.592 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 79898 ]] 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 79898 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.cFe3Di 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.cFe3Di/tests/target /tmp/spdk.cFe3Di 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:49.593 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55262121984 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6732591104 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30986100736 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=11255808 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.593 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996578304 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=778240 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:49.593 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:49.594 * Looking for test storage... 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55262121984 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=8947183616 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.594 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.595 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.595 06:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.595 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.497 06:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:51.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:51.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.497 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:51.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:51.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.498 06:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.498 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:51.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:11:51.756 00:11:51.756 --- 10.0.0.2 ping statistics --- 00:11:51.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.756 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:11:51.756 00:11:51.756 --- 10.0.0.1 ping statistics --- 00:11:51.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.756 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:51.756 06:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:51.756 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.757 ************************************ 00:11:51.757 START TEST nvmf_filesystem_no_in_capsule 00:11:51.757 ************************************ 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=81522 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 81522 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 81522 ']' 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.757 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.757 [2024-07-26 06:04:03.047234] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:51.757 [2024-07-26 06:04:03.047376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.016 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.016 [2024-07-26 06:04:03.212672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.275 [2024-07-26 06:04:03.461708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.275 [2024-07-26 06:04:03.461784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:52.275 [2024-07-26 06:04:03.461806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.275 [2024-07-26 06:04:03.461823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.275 [2024-07-26 06:04:03.461841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.275 [2024-07-26 06:04:03.461970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.275 [2024-07-26 06:04:03.462034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.276 [2024-07-26 06:04:03.462122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.276 [2024-07-26 06:04:03.462129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.841 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.841 [2024-07-26 06:04:04.099827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.842 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.842 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:52.842 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.842 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.407 Malloc1 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.407 [2024-07-26 06:04:04.690538] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:53.407 06:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.407 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.408 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.408 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:53.408 { 00:11:53.408 "name": "Malloc1", 00:11:53.408 "aliases": [ 00:11:53.408 "bfcc0567-8b19-4ee6-8611-8734170375ef" 00:11:53.408 ], 00:11:53.408 "product_name": "Malloc disk", 00:11:53.408 "block_size": 512, 00:11:53.408 "num_blocks": 1048576, 00:11:53.408 "uuid": "bfcc0567-8b19-4ee6-8611-8734170375ef", 00:11:53.408 "assigned_rate_limits": { 00:11:53.408 "rw_ios_per_sec": 0, 00:11:53.408 "rw_mbytes_per_sec": 0, 00:11:53.408 "r_mbytes_per_sec": 0, 00:11:53.408 "w_mbytes_per_sec": 0 00:11:53.408 }, 00:11:53.408 "claimed": true, 00:11:53.408 "claim_type": "exclusive_write", 00:11:53.408 "zoned": false, 00:11:53.408 "supported_io_types": { 00:11:53.408 "read": true, 00:11:53.408 "write": true, 00:11:53.408 "unmap": true, 00:11:53.408 "flush": true, 00:11:53.408 "reset": true, 00:11:53.408 "nvme_admin": false, 00:11:53.408 "nvme_io": false, 00:11:53.408 "nvme_io_md": false, 00:11:53.408 "write_zeroes": true, 00:11:53.408 "zcopy": true, 00:11:53.408 "get_zone_info": false, 00:11:53.408 "zone_management": false, 00:11:53.408 "zone_append": false, 00:11:53.408 "compare": false, 00:11:53.408 "compare_and_write": 
false, 00:11:53.408 "abort": true, 00:11:53.408 "seek_hole": false, 00:11:53.408 "seek_data": false, 00:11:53.408 "copy": true, 00:11:53.408 "nvme_iov_md": false 00:11:53.408 }, 00:11:53.408 "memory_domains": [ 00:11:53.408 { 00:11:53.408 "dma_device_id": "system", 00:11:53.408 "dma_device_type": 1 00:11:53.408 }, 00:11:53.408 { 00:11:53.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.408 "dma_device_type": 2 00:11:53.408 } 00:11:53.408 ], 00:11:53.408 "driver_specific": {} 00:11:53.408 } 00:11:53.408 ]' 00:11:53.408 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:53.667 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:53.667 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:53.667 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:53.667 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:53.667 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:53.667 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.667 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.236 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:54.236 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.236 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.236 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.236 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:56.140 06:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:56.140 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:56.398 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:57.333 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:58.271 06:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.271 ************************************ 00:11:58.271 START TEST filesystem_ext4 00:11:58.271 ************************************ 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:58.271 06:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:58.271 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:58.271 mke2fs 1.46.5 (30-Dec-2021) 00:11:58.529 Discarding device blocks: 0/522240 done 00:11:58.529 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:58.529 Filesystem UUID: 51c81f7a-883b-432f-84aa-873af61ebce7 00:11:58.529 Superblock backups stored on blocks: 00:11:58.529 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:58.529 00:11:58.529 Allocating group tables: 0/64 done 00:11:58.529 Writing inode tables: 0/64 done 00:11:58.800 Creating journal (8192 blocks): done 00:11:59.770 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:11:59.770 00:11:59.770 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:59.770 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.028 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.288 06:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 81522 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.288 00:12:00.288 real 0m1.851s 00:12:00.288 user 0m0.020s 00:12:00.288 sys 0m0.050s 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:00.288 ************************************ 00:12:00.288 END TEST filesystem_ext4 00:12:00.288 ************************************ 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:00.288 
06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.288 ************************************ 00:12:00.288 START TEST filesystem_btrfs 00:12:00.288 ************************************ 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:00.288 06:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:00.288 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:00.548 btrfs-progs v6.6.2 00:12:00.548 See https://btrfs.readthedocs.io for more information. 00:12:00.548 00:12:00.548 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:00.548 NOTE: several default settings have changed in version 5.15, please make sure 00:12:00.548 this does not affect your deployments: 00:12:00.548 - DUP for metadata (-m dup) 00:12:00.548 - enabled no-holes (-O no-holes) 00:12:00.548 - enabled free-space-tree (-R free-space-tree) 00:12:00.549 00:12:00.549 Label: (null) 00:12:00.549 UUID: 776834e2-f5e3-4022-85fe-1cbc9dedfc70 00:12:00.549 Node size: 16384 00:12:00.549 Sector size: 4096 00:12:00.549 Filesystem size: 510.00MiB 00:12:00.549 Block group profiles: 00:12:00.549 Data: single 8.00MiB 00:12:00.549 Metadata: DUP 32.00MiB 00:12:00.549 System: DUP 8.00MiB 00:12:00.549 SSD detected: yes 00:12:00.549 Zoned device: no 00:12:00.549 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:00.549 Runtime features: free-space-tree 00:12:00.549 Checksum: crc32c 00:12:00.549 Number of devices: 1 00:12:00.549 Devices: 00:12:00.549 ID SIZE PATH 00:12:00.549 1 510.00MiB /dev/nvme0n1p1 00:12:00.549 00:12:00.549 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:00.549 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:12:01.486 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.486 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:01.486 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.486 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:01.486 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:01.486 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.745 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 81522 00:12:01.745 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.745 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.745 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.746 00:12:01.746 real 0m1.361s 00:12:01.746 user 0m0.020s 00:12:01.746 sys 0m0.110s 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.746 ************************************ 00:12:01.746 END TEST filesystem_btrfs 00:12:01.746 ************************************ 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.746 ************************************ 00:12:01.746 START TEST filesystem_xfs 00:12:01.746 ************************************ 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:01.746 06:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:01.746 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:01.746 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:01.746 = sectsz=512 attr=2, projid32bit=1 00:12:01.746 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:01.746 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:01.746 data = bsize=4096 blocks=130560, imaxpct=25 00:12:01.746 = sunit=0 swidth=0 blks 00:12:01.746 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:01.746 log =internal log bsize=4096 blocks=16384, version=2 00:12:01.746 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:01.746 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:02.683 Discarding blocks...Done. 
00:12:02.683 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:02.683 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 81522 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.587 06:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.587 00:12:04.587 real 0m2.724s 00:12:04.587 user 0m0.022s 00:12:04.587 sys 0m0.048s 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:04.587 ************************************ 00:12:04.587 END TEST filesystem_xfs 00:12:04.587 ************************************ 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 81522 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 81522 ']' 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 81522 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81522 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81522' 00:12:04.587 killing process with pid 81522 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 81522 00:12:04.587 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 81522 00:12:07.122 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.122 00:12:07.122 real 0m15.414s 00:12:07.122 user 0m57.120s 00:12:07.122 sys 0m2.049s 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 ************************************ 00:12:07.122 END TEST nvmf_filesystem_no_in_capsule 00:12:07.122 ************************************ 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.122 
06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 ************************************ 00:12:07.122 START TEST nvmf_filesystem_in_capsule 00:12:07.122 ************************************ 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=83591 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 83591 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 83591 ']' 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.122 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.382 [2024-07-26 06:04:18.523523] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:07.382 [2024-07-26 06:04:18.523664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.382 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.382 [2024-07-26 06:04:18.660826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.641 [2024-07-26 06:04:18.921686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.641 [2024-07-26 06:04:18.921769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.641 [2024-07-26 06:04:18.921798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.641 [2024-07-26 06:04:18.921821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:07.641 [2024-07-26 06:04:18.921844] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.641 [2024-07-26 06:04:18.921974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.641 [2024-07-26 06:04:18.922046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.641 [2024-07-26 06:04:18.922141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.641 [2024-07-26 06:04:18.922149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.207 06:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.207 [2024-07-26 06:04:19.507004] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.207 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.774 Malloc1 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.774 06:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.774 [2024-07-26 06:04:20.086465] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:08.774 06:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.774 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.032 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:09.032 { 00:12:09.032 "name": "Malloc1", 00:12:09.032 "aliases": [ 00:12:09.032 "342fe093-566a-42e8-a441-76d335ee8cba" 00:12:09.032 ], 00:12:09.032 "product_name": "Malloc disk", 00:12:09.032 "block_size": 512, 00:12:09.032 "num_blocks": 1048576, 00:12:09.032 "uuid": "342fe093-566a-42e8-a441-76d335ee8cba", 00:12:09.032 "assigned_rate_limits": { 00:12:09.032 "rw_ios_per_sec": 0, 00:12:09.032 "rw_mbytes_per_sec": 0, 00:12:09.032 "r_mbytes_per_sec": 0, 00:12:09.032 "w_mbytes_per_sec": 0 00:12:09.032 }, 00:12:09.032 "claimed": true, 00:12:09.032 "claim_type": "exclusive_write", 00:12:09.032 "zoned": false, 00:12:09.032 "supported_io_types": { 00:12:09.032 "read": true, 00:12:09.032 "write": true, 00:12:09.032 "unmap": true, 00:12:09.032 "flush": true, 00:12:09.032 "reset": true, 00:12:09.032 "nvme_admin": false, 00:12:09.032 "nvme_io": false, 00:12:09.032 "nvme_io_md": false, 00:12:09.032 "write_zeroes": true, 00:12:09.032 "zcopy": true, 00:12:09.032 "get_zone_info": false, 00:12:09.032 "zone_management": false, 00:12:09.032 "zone_append": false, 00:12:09.032 "compare": false, 00:12:09.032 "compare_and_write": false, 00:12:09.032 "abort": true, 00:12:09.032 "seek_hole": false, 00:12:09.032 "seek_data": false, 00:12:09.032 "copy": true, 00:12:09.033 "nvme_iov_md": false 00:12:09.033 }, 00:12:09.033 "memory_domains": [ 00:12:09.033 { 00:12:09.033 "dma_device_id": "system", 00:12:09.033 "dma_device_type": 1 00:12:09.033 }, 
00:12:09.033 { 00:12:09.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.033 "dma_device_type": 2 00:12:09.033 } 00:12:09.033 ], 00:12:09.033 "driver_specific": {} 00:12:09.033 } 00:12:09.033 ]' 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:09.033 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.603 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.603 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:09.603 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:12:09.603 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:09.603 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:12.140 06:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:12.140 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:12.140 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:12.398 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.775 ************************************ 00:12:13.775 START TEST 
filesystem_in_capsule_ext4 00:12:13.775 ************************************ 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:13.775 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 
-F /dev/nvme0n1p1 00:12:13.775 mke2fs 1.46.5 (30-Dec-2021) 00:12:13.775 Discarding device blocks: 0/522240 done 00:12:13.775 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:13.775 Filesystem UUID: 95232f91-04df-49b6-ac4c-89467c7e30f9 00:12:13.775 Superblock backups stored on blocks: 00:12:13.775 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:13.775 00:12:13.775 Allocating group tables: 0/64 done 00:12:13.775 Writing inode tables: 0/64 done 00:12:16.311 Creating journal (8192 blocks): done 00:12:16.311 Writing superblocks and filesystem accounting information: 0/64 done 00:12:16.311 00:12:16.311 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:16.311 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.275 06:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 83591 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.275 00:12:17.275 real 0m3.801s 00:12:17.275 user 0m0.017s 00:12:17.275 sys 0m0.057s 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:17.275 ************************************ 00:12:17.275 END TEST filesystem_in_capsule_ext4 00:12:17.275 ************************************ 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.275 ************************************ 00:12:17.275 START TEST filesystem_in_capsule_btrfs 00:12:17.275 ************************************ 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:17.275 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:17.275 06:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:17.535 btrfs-progs v6.6.2 00:12:17.535 See https://btrfs.readthedocs.io for more information. 00:12:17.535 00:12:17.535 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:17.535 NOTE: several default settings have changed in version 5.15, please make sure 00:12:17.535 this does not affect your deployments: 00:12:17.535 - DUP for metadata (-m dup) 00:12:17.535 - enabled no-holes (-O no-holes) 00:12:17.535 - enabled free-space-tree (-R free-space-tree) 00:12:17.535 00:12:17.535 Label: (null) 00:12:17.535 UUID: ce9c6565-bac6-458d-9f00-9bbaabd95df2 00:12:17.535 Node size: 16384 00:12:17.535 Sector size: 4096 00:12:17.535 Filesystem size: 510.00MiB 00:12:17.535 Block group profiles: 00:12:17.535 Data: single 8.00MiB 00:12:17.535 Metadata: DUP 32.00MiB 00:12:17.535 System: DUP 8.00MiB 00:12:17.535 SSD detected: yes 00:12:17.535 Zoned device: no 00:12:17.535 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:17.535 Runtime features: free-space-tree 00:12:17.535 Checksum: crc32c 00:12:17.535 Number of devices: 1 00:12:17.535 Devices: 00:12:17.535 ID SIZE PATH 00:12:17.535 1 510.00MiB /dev/nvme0n1p1 00:12:17.535 00:12:17.535 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:17.535 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:18.472 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:18.472 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@25 -- # sync 00:12:18.472 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:18.472 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:18.472 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:18.472 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 83591 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:18.473 00:12:18.473 real 0m1.154s 00:12:18.473 user 0m0.020s 00:12:18.473 sys 0m0.108s 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:18.473 
************************************ 00:12:18.473 END TEST filesystem_in_capsule_btrfs 00:12:18.473 ************************************ 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.473 ************************************ 00:12:18.473 START TEST filesystem_in_capsule_xfs 00:12:18.473 ************************************ 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:18.473 06:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:18.473 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:18.733 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:18.733 = sectsz=512 attr=2, projid32bit=1 00:12:18.733 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:18.733 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:18.733 data = bsize=4096 blocks=130560, imaxpct=25 00:12:18.733 = sunit=0 swidth=0 blks 00:12:18.733 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:18.733 log =internal log bsize=4096 blocks=16384, version=2 00:12:18.733 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:18.733 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:19.301 Discarding blocks...Done. 
00:12:19.301 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:19.301 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 83591 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.203 00:12:21.203 real 0m2.638s 00:12:21.203 user 0m0.018s 00:12:21.203 sys 0m0.055s 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:21.203 ************************************ 00:12:21.203 END TEST filesystem_in_capsule_xfs 00:12:21.203 ************************************ 00:12:21.203 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:21.461 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:21.461 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.721 06:04:32 
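The filesystem_in_capsule_xfs trace above (target/filesystem.sh lines 23-30) runs a simple mkfs → mount → touch → sync → rm → umount cycle against the exported NVMe partition. A minimal sketch of that sequence, with a dry-run indirection added here so it can be exercised without root or a real /dev/nvme0n1p1:

```shell
#!/usr/bin/env bash
# Sketch of the filesystem smoke test driven by target/filesystem.sh in the
# trace above. The run() wrapper is an addition for illustration: with
# DRY_RUN=1 it echoes each command instead of executing it.
set -euo pipefail

run() {
    if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "$*"; else "$@"; fi
}

filesystem_cycle() {
    local dev=$1 mnt=$2
    run mkfs.xfs -f "$dev"   # -f forces mkfs over any existing signature
    run mount "$dev" "$mnt"
    run touch "$mnt/aaa"     # create a file and flush it to the target
    run sync
    run rm "$mnt/aaa"
    run sync
    run umount "$mnt"
}

DRY_RUN=1 filesystem_cycle /dev/nvme0n1p1 /mnt/device
```

The real script additionally loops on umount failures and re-checks `lsblk` for the partition, which is omitted here.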
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 83591 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 83591 ']' 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 83591 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:21.721 06:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83591 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83591' 00:12:21.721 killing process with pid 83591 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 83591 00:12:21.721 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 83591 00:12:24.252 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:24.252 00:12:24.252 real 0m17.056s 00:12:24.252 user 1m3.506s 00:12:24.252 sys 0m2.186s 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.252 ************************************ 00:12:24.252 END TEST nvmf_filesystem_in_capsule 00:12:24.252 ************************************ 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.252 06:04:35 
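The teardown trace above shows autotest_common.sh's killprocess helper at work (kill -0 liveness check, `ps -o comm=` to refuse signalling a sudo wrapper, then kill and wait). A hedged sketch of that logic, reconstructed from the traced line numbers rather than the script source:

```shell
#!/usr/bin/env bash
# Sketch of killprocess as it appears in the autotest_common.sh trace:
# validate the pid, refuse to kill a sudo wrapper, then SIGTERM and reap.
set -euo pipefail

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1      # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap so the pid is really gone
}

sleep 30 & pid=$!
killprocess "$pid"
```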
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.252 rmmod nvme_tcp 00:12:24.252 rmmod nvme_fabrics 00:12:24.252 rmmod nvme_keyring 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.252 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.787 00:12:26.787 real 0m36.984s 00:12:26.787 user 2m1.555s 00:12:26.787 sys 0m5.808s 00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 ************************************ 00:12:26.787 END TEST nvmf_filesystem 00:12:26.787 ************************************ 00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 ************************************ 00:12:26.787 START TEST nvmf_target_discovery 00:12:26.787 ************************************ 00:12:26.787 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.788 * Looking for test storage... 
00:12:26.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.788 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.694 
06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:12:28.694 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.694 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.695 06:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.695 06:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:12:28.695 00:12:28.695 --- 10.0.0.2 ping statistics --- 00:12:28.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.695 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:12:28.695 00:12:28.695 --- 10.0.0.1 ping statistics --- 00:12:28.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.695 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.695 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:28.696 06:04:39 
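The nvmf_tcp_init trace above moves one port of the NIC pair (cvl_0_0) into a private network namespace so the SPDK target (10.0.0.2) and the initiator (10.0.0.1) exchange real TCP traffic on one host, then verifies reachability with ping. A sketch of that setup under the interface names from this run; the commands need root, so this version only echoes them:

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init from nvmf/common.sh as traced above. All commands
# are echoed rather than executed (namespace setup requires root).
set -euo pipefail

tcp_init() {
    local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    echo ip netns add "$ns"
    echo ip link set "$tgt_if" netns "$ns"
    echo ip addr add 10.0.0.1/24 dev "$ini_if"
    echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    echo ip link set "$ini_if" up
    echo ip netns exec "$ns" ip link set "$tgt_if" up
    echo ip netns exec "$ns" ip link set lo up
    # open the NVMe-oF listening port (4420) through the host firewall
    echo iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    echo ping -c 1 10.0.0.2   # initiator -> target reachability check
}

tcp_init cvl_0_0 cvl_0_1
```

In the log the target application is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...` (NVMF_TARGET_NS_CMD), which is why teardown later flushes cvl_0_1 and removes the namespace.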
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=87730 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 87730 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 87730 ']' 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.696 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.696 [2024-07-26 06:04:39.998839] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:28.696 [2024-07-26 06:04:39.998981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.955 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.955 [2024-07-26 06:04:40.135796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.214 [2024-07-26 06:04:40.395429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.214 [2024-07-26 06:04:40.395513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.214 [2024-07-26 06:04:40.395541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.214 [2024-07-26 06:04:40.395561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.214 [2024-07-26 06:04:40.395583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:29.214 [2024-07-26 06:04:40.395711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.214 [2024-07-26 06:04:40.395778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.214 [2024-07-26 06:04:40.395871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.214 [2024-07-26 06:04:40.395881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.782 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.782 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 [2024-07-26 06:04:40.963952] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:29.783 06:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 Null1 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 [2024-07-26 06:04:41.005089] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 Null2 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 
06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 Null3 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 Null4 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:29.783 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:29.784 06:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.784 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.045 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.045 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:30.045 00:12:30.045 Discovery Log Number of Records 6, Generation counter 6 00:12:30.045 =====Discovery Log Entry 0====== 00:12:30.045 trtype: tcp 00:12:30.045 adrfam: ipv4 00:12:30.045 subtype: current discovery subsystem 00:12:30.045 treq: not required 00:12:30.045 portid: 0 00:12:30.045 trsvcid: 4420 00:12:30.045 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:30.045 traddr: 10.0.0.2 00:12:30.045 eflags: explicit discovery connections, duplicate discovery information 00:12:30.045 sectype: none 00:12:30.045 =====Discovery Log Entry 1====== 00:12:30.045 trtype: tcp 00:12:30.045 adrfam: ipv4 00:12:30.045 subtype: nvme subsystem 00:12:30.045 treq: not required 00:12:30.045 portid: 0 00:12:30.045 trsvcid: 4420 00:12:30.045 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:30.045 traddr: 10.0.0.2 00:12:30.045 eflags: none 00:12:30.045 sectype: none 00:12:30.045 =====Discovery Log Entry 2====== 00:12:30.045 trtype: tcp 00:12:30.045 adrfam: ipv4 00:12:30.045 subtype: nvme subsystem 00:12:30.045 treq: not required 00:12:30.045 portid: 0 00:12:30.045 trsvcid: 4420 00:12:30.045 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:30.045 traddr: 10.0.0.2 00:12:30.045 eflags: none 00:12:30.045 sectype: none 00:12:30.045 =====Discovery Log Entry 3====== 00:12:30.045 trtype: tcp 00:12:30.045 adrfam: ipv4 00:12:30.045 subtype: nvme subsystem 00:12:30.045 treq: not required 00:12:30.045 portid: 
0 00:12:30.045 trsvcid: 4420 00:12:30.045 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:30.045 traddr: 10.0.0.2 00:12:30.045 eflags: none 00:12:30.045 sectype: none 00:12:30.045 =====Discovery Log Entry 4====== 00:12:30.045 trtype: tcp 00:12:30.045 adrfam: ipv4 00:12:30.045 subtype: nvme subsystem 00:12:30.045 treq: not required 00:12:30.045 portid: 0 00:12:30.045 trsvcid: 4420 00:12:30.045 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:30.045 traddr: 10.0.0.2 00:12:30.045 eflags: none 00:12:30.045 sectype: none 00:12:30.045 =====Discovery Log Entry 5====== 00:12:30.045 trtype: tcp 00:12:30.045 adrfam: ipv4 00:12:30.045 subtype: discovery subsystem referral 00:12:30.045 treq: not required 00:12:30.045 portid: 0 00:12:30.045 trsvcid: 4430 00:12:30.045 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:30.045 traddr: 10.0.0.2 00:12:30.045 eflags: none 00:12:30.045 sectype: none 00:12:30.045 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:30.045 Perform nvmf subsystem discovery via RPC 00:12:30.045 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:30.045 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.045 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.045 [ 00:12:30.045 { 00:12:30.045 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:30.045 "subtype": "Discovery", 00:12:30.045 "listen_addresses": [ 00:12:30.045 { 00:12:30.045 "trtype": "TCP", 00:12:30.045 "adrfam": "IPv4", 00:12:30.045 "traddr": "10.0.0.2", 00:12:30.045 "trsvcid": "4420" 00:12:30.045 } 00:12:30.045 ], 00:12:30.045 "allow_any_host": true, 00:12:30.045 "hosts": [] 00:12:30.045 }, 00:12:30.045 { 00:12:30.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.045 "subtype": "NVMe", 00:12:30.045 "listen_addresses": [ 
00:12:30.045 { 00:12:30.045 "trtype": "TCP", 00:12:30.045 "adrfam": "IPv4", 00:12:30.045 "traddr": "10.0.0.2", 00:12:30.045 "trsvcid": "4420" 00:12:30.045 } 00:12:30.045 ], 00:12:30.045 "allow_any_host": true, 00:12:30.045 "hosts": [], 00:12:30.045 "serial_number": "SPDK00000000000001", 00:12:30.045 "model_number": "SPDK bdev Controller", 00:12:30.045 "max_namespaces": 32, 00:12:30.045 "min_cntlid": 1, 00:12:30.045 "max_cntlid": 65519, 00:12:30.045 "namespaces": [ 00:12:30.045 { 00:12:30.045 "nsid": 1, 00:12:30.045 "bdev_name": "Null1", 00:12:30.045 "name": "Null1", 00:12:30.045 "nguid": "FB7582D6D772417195B3DEFBB0DB3232", 00:12:30.045 "uuid": "fb7582d6-d772-4171-95b3-defbb0db3232" 00:12:30.045 } 00:12:30.045 ] 00:12:30.045 }, 00:12:30.045 { 00:12:30.045 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:30.045 "subtype": "NVMe", 00:12:30.045 "listen_addresses": [ 00:12:30.045 { 00:12:30.045 "trtype": "TCP", 00:12:30.045 "adrfam": "IPv4", 00:12:30.045 "traddr": "10.0.0.2", 00:12:30.045 "trsvcid": "4420" 00:12:30.045 } 00:12:30.045 ], 00:12:30.045 "allow_any_host": true, 00:12:30.045 "hosts": [], 00:12:30.045 "serial_number": "SPDK00000000000002", 00:12:30.045 "model_number": "SPDK bdev Controller", 00:12:30.045 "max_namespaces": 32, 00:12:30.045 "min_cntlid": 1, 00:12:30.045 "max_cntlid": 65519, 00:12:30.045 "namespaces": [ 00:12:30.045 { 00:12:30.045 "nsid": 1, 00:12:30.045 "bdev_name": "Null2", 00:12:30.045 "name": "Null2", 00:12:30.045 "nguid": "4502EA412E0E482EA8559A7D60F5C90C", 00:12:30.045 "uuid": "4502ea41-2e0e-482e-a855-9a7d60f5c90c" 00:12:30.045 } 00:12:30.045 ] 00:12:30.045 }, 00:12:30.045 { 00:12:30.045 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:30.045 "subtype": "NVMe", 00:12:30.045 "listen_addresses": [ 00:12:30.045 { 00:12:30.045 "trtype": "TCP", 00:12:30.045 "adrfam": "IPv4", 00:12:30.045 "traddr": "10.0.0.2", 00:12:30.045 "trsvcid": "4420" 00:12:30.045 } 00:12:30.045 ], 00:12:30.045 "allow_any_host": true, 00:12:30.045 "hosts": [], 00:12:30.045 
"serial_number": "SPDK00000000000003", 00:12:30.045 "model_number": "SPDK bdev Controller", 00:12:30.045 "max_namespaces": 32, 00:12:30.045 "min_cntlid": 1, 00:12:30.045 "max_cntlid": 65519, 00:12:30.045 "namespaces": [ 00:12:30.045 { 00:12:30.045 "nsid": 1, 00:12:30.045 "bdev_name": "Null3", 00:12:30.045 "name": "Null3", 00:12:30.045 "nguid": "BA801A887059496289464594933850BC", 00:12:30.045 "uuid": "ba801a88-7059-4962-8946-4594933850bc" 00:12:30.045 } 00:12:30.045 ] 00:12:30.045 }, 00:12:30.045 { 00:12:30.045 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:30.045 "subtype": "NVMe", 00:12:30.045 "listen_addresses": [ 00:12:30.045 { 00:12:30.045 "trtype": "TCP", 00:12:30.045 "adrfam": "IPv4", 00:12:30.045 "traddr": "10.0.0.2", 00:12:30.045 "trsvcid": "4420" 00:12:30.045 } 00:12:30.045 ], 00:12:30.045 "allow_any_host": true, 00:12:30.045 "hosts": [], 00:12:30.045 "serial_number": "SPDK00000000000004", 00:12:30.045 "model_number": "SPDK bdev Controller", 00:12:30.045 "max_namespaces": 32, 00:12:30.045 "min_cntlid": 1, 00:12:30.046 "max_cntlid": 65519, 00:12:30.046 "namespaces": [ 00:12:30.046 { 00:12:30.046 "nsid": 1, 00:12:30.046 "bdev_name": "Null4", 00:12:30.046 "name": "Null4", 00:12:30.046 "nguid": "D88822F442974FF399ADD2B976D9756D", 00:12:30.046 "uuid": "d88822f4-4297-4ff3-99ad-d2b976d9756d" 00:12:30.046 } 00:12:30.046 ] 00:12:30.046 } 00:12:30.046 ] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.046 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.306 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.306 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:30.306 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.306 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:30.306 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:30.306 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.306 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:30.307 
06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.307 rmmod nvme_tcp 00:12:30.307 rmmod nvme_fabrics 00:12:30.307 rmmod nvme_keyring 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 87730 ']' 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 87730 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 87730 ']' 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 87730 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87730 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87730' 00:12:30.307 killing process with pid 87730 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 87730 00:12:30.307 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 87730 00:12:31.686 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.687 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.667 00:12:33.667 real 0m7.105s 00:12:33.667 user 0m8.919s 00:12:33.667 sys 0m1.999s 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.667 06:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.667 ************************************ 00:12:33.667 END TEST nvmf_target_discovery 00:12:33.667 ************************************ 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.667 ************************************ 00:12:33.667 START TEST nvmf_referrals 00:12:33.667 ************************************ 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:33.667 * Looking for test storage... 
00:12:33.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.667 06:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.667 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:33.668 06:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.668 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:35.572 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:35.572 06:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:35.572 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:35.572 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:35.572 Found net devices under 0000:0a:00.1: cvl_0_1 
00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:35.572 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:35.573 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:35.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:12:35.833 00:12:35.833 --- 10.0.0.2 ping statistics --- 00:12:35.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.833 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:12:35.833 00:12:35.833 --- 10.0.0.1 ping statistics --- 00:12:35.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.833 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=89960 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 89960 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 89960 ']' 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.833 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.833 [2024-07-26 06:04:47.075375] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:35.833 [2024-07-26 06:04:47.075541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.093 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.093 [2024-07-26 06:04:47.232127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.353 [2024-07-26 06:04:47.499387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.353 [2024-07-26 06:04:47.499469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:36.353 [2024-07-26 06:04:47.499498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.353 [2024-07-26 06:04:47.499519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.353 [2024-07-26 06:04:47.499542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.353 [2024-07-26 06:04:47.499685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.353 [2024-07-26 06:04:47.499747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.353 [2024-07-26 06:04:47.499793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.353 [2024-07-26 06:04:47.499806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.919 [2024-07-26 06:04:48.095399] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.919 [2024-07-26 06:04:48.108895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:36.919 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:36.920 06:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:36.920 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.178 06:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:37.178 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:37.437 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:37.698 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:37.958 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:38.217 06:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.217 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:38.476 rmmod nvme_tcp 00:12:38.476 rmmod nvme_fabrics 00:12:38.476 rmmod nvme_keyring 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 89960 ']' 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 89960 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 89960 ']' 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 89960 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89960 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89960' 00:12:38.476 killing process with pid 89960 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 
89960 00:12:38.476 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 89960 00:12:39.858 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.858 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.397 00:12:42.397 real 0m8.306s 00:12:42.397 user 0m14.317s 00:12:42.397 sys 0m2.302s 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:42.397 ************************************ 00:12:42.397 END TEST nvmf_referrals 00:12:42.397 ************************************ 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.397 ************************************ 00:12:42.397 START TEST nvmf_connect_disconnect 00:12:42.397 ************************************ 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:42.397 * Looking for test storage... 00:12:42.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.397 06:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.397 06:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.397 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:44.302 06:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- 
# [[ tcp == rdma ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:44.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:44.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:44.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.302 06:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:44.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:44.302 06:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.302 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:44.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:12:44.302 00:12:44.303 --- 10.0.0.2 ping statistics --- 00:12:44.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.303 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:12:44.303 00:12:44.303 --- 10.0.0.1 ping statistics --- 00:12:44.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.303 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # 
nvmfpid=92420 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 92420 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 92420 ']' 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.303 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.303 [2024-07-26 06:04:55.601933] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:44.303 [2024-07-26 06:04:55.602099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.562 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.562 [2024-07-26 06:04:55.739369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.822 [2024-07-26 06:04:56.000979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:44.822 [2024-07-26 06:04:56.001071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.822 [2024-07-26 06:04:56.001102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.822 [2024-07-26 06:04:56.001126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.822 [2024-07-26 06:04:56.001149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.822 [2024-07-26 06:04:56.001275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.822 [2024-07-26 06:04:56.001352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.822 [2024-07-26 06:04:56.001396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.822 [2024-07-26 06:04:56.001409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:45.389 06:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.389 [2024-07-26 06:04:56.581053] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.389 06:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.389 [2024-07-26 06:04:56.690510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:45.389 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:47.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.839 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.658 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.154 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.008 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.974 rmmod nvme_tcp 00:16:38.974 rmmod nvme_fabrics 00:16:38.974 rmmod nvme_keyring 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 92420 ']' 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 92420 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 92420 ']' 00:16:38.974 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 92420 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92420 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92420' 00:16:38.975 killing process with pid 92420 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 92420 00:16:38.975 06:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 92420 00:16:40.346 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:16:40.346 
06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:40.346 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:40.346 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:40.346 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.346 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.346 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.346 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.346 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:42.878 00:16:42.878 real 4m0.446s 00:16:42.878 user 15m8.898s 00:16:42.878 sys 0m37.300s 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:42.878 ************************************ 00:16:42.878 END TEST nvmf_connect_disconnect 00:16:42.878 ************************************ 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:42.878 ************************************ 00:16:42.878 START TEST nvmf_multitarget 00:16:42.878 ************************************ 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:42.878 * Looking for test storage... 00:16:42.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.878 06:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.878 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.879 06:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:42.879 
06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.879 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:44.782 06:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.782 06:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:44.782 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:44.783 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:44.783 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.783 06:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:44.783 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:44.783 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:44.783 06:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:44.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:16:44.783 00:16:44.783 --- 10.0.0.2 ping statistics --- 00:16:44.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.783 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:16:44.783 00:16:44.783 --- 10.0.0.1 ping statistics --- 00:16:44.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.783 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=123955 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 123955 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 123955 ']' 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.783 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:44.783 [2024-07-26 06:08:56.001986] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:44.783 [2024-07-26 06:08:56.002172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.783 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.042 [2024-07-26 06:08:56.136033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.300 [2024-07-26 06:08:56.397767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.300 [2024-07-26 06:08:56.397841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:45.300 [2024-07-26 06:08:56.397870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.300 [2024-07-26 06:08:56.397892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.300 [2024-07-26 06:08:56.397913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.300 [2024-07-26 06:08:56.398042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.300 [2024-07-26 06:08:56.398125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.300 [2024-07-26 06:08:56.398148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.300 [2024-07-26 06:08:56.398160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:45.864 06:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:45.864 06:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:45.864 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:45.864 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:45.864 "nvmf_tgt_1" 00:16:45.864 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:46.121 "nvmf_tgt_2" 00:16:46.121 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:46.121 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:46.121 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:46.121 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:46.379 true 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:46.379 true 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.379 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:16:46.636 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.636 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.637 rmmod nvme_tcp 00:16:46.637 rmmod nvme_fabrics 00:16:46.637 rmmod nvme_keyring 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 123955 ']' 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 123955 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 123955 ']' 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 123955 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123955 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123955' 00:16:46.637 killing process with pid 123955 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 123955 00:16:46.637 06:08:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 123955 00:16:48.010 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.010 06:08:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.912 00:16:49.912 real 0m7.426s 00:16:49.912 user 0m11.117s 00:16:49.912 sys 0m2.105s 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 ************************************ 00:16:49.912 END TEST nvmf_multitarget 00:16:49.912 ************************************ 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 ************************************ 00:16:49.912 START TEST nvmf_rpc 00:16:49.912 ************************************ 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:49.912 * Looking for test storage... 
00:16:49.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.912 
06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.912 06:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.912 06:09:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.814 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.814 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.814 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.814 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.814 06:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.814 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:52.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:16:52.072 00:16:52.072 --- 10.0.0.2 ping statistics --- 00:16:52.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.072 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:16:52.072 00:16:52.072 --- 10.0.0.1 ping statistics --- 00:16:52.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.072 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=126302 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 126302 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 126302 ']' 00:16:52.072 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.073 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.073 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.073 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.073 06:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.073 [2024-07-26 06:09:03.309845] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:52.073 [2024-07-26 06:09:03.309996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.073 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.330 [2024-07-26 06:09:03.451281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.588 [2024-07-26 06:09:03.716872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.588 [2024-07-26 06:09:03.716951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.588 [2024-07-26 06:09:03.716979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.588 [2024-07-26 06:09:03.717001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.588 [2024-07-26 06:09:03.717023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:52.588 [2024-07-26 06:09:03.717149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.588 [2024-07-26 06:09:03.717206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.588 [2024-07-26 06:09:03.717250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.588 [2024-07-26 06:09:03.717262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:53.155 "tick_rate": 2700000000, 00:16:53.155 "poll_groups": [ 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_000", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.155 "current_admin_qpairs": 0, 00:16:53.155 "current_io_qpairs": 0, 00:16:53.155 "pending_bdev_io": 0, 00:16:53.155 "completed_nvme_io": 0, 
00:16:53.155 "transports": [] 00:16:53.155 }, 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_001", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.155 "current_admin_qpairs": 0, 00:16:53.155 "current_io_qpairs": 0, 00:16:53.155 "pending_bdev_io": 0, 00:16:53.155 "completed_nvme_io": 0, 00:16:53.155 "transports": [] 00:16:53.155 }, 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_002", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.155 "current_admin_qpairs": 0, 00:16:53.155 "current_io_qpairs": 0, 00:16:53.155 "pending_bdev_io": 0, 00:16:53.155 "completed_nvme_io": 0, 00:16:53.155 "transports": [] 00:16:53.155 }, 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_003", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.155 "current_admin_qpairs": 0, 00:16:53.155 "current_io_qpairs": 0, 00:16:53.155 "pending_bdev_io": 0, 00:16:53.155 "completed_nvme_io": 0, 00:16:53.155 "transports": [] 00:16:53.155 } 00:16:53.155 ] 00:16:53.155 }' 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.155 [2024-07-26 06:09:04.372286] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.155 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:53.155 "tick_rate": 2700000000, 00:16:53.155 "poll_groups": [ 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_000", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.155 "current_admin_qpairs": 0, 00:16:53.155 "current_io_qpairs": 0, 00:16:53.155 "pending_bdev_io": 0, 00:16:53.155 "completed_nvme_io": 0, 00:16:53.155 "transports": [ 00:16:53.155 { 00:16:53.155 "trtype": "TCP" 00:16:53.155 } 00:16:53.155 ] 00:16:53.155 }, 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_001", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.155 "current_admin_qpairs": 0, 00:16:53.155 "current_io_qpairs": 0, 00:16:53.155 "pending_bdev_io": 0, 00:16:53.155 "completed_nvme_io": 0, 00:16:53.155 "transports": [ 00:16:53.155 { 00:16:53.155 "trtype": "TCP" 00:16:53.155 } 00:16:53.155 ] 00:16:53.155 }, 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_002", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.155 "current_admin_qpairs": 0, 00:16:53.155 "current_io_qpairs": 0, 00:16:53.155 "pending_bdev_io": 0, 00:16:53.155 "completed_nvme_io": 0, 00:16:53.155 
"transports": [ 00:16:53.155 { 00:16:53.155 "trtype": "TCP" 00:16:53.155 } 00:16:53.155 ] 00:16:53.155 }, 00:16:53.155 { 00:16:53.155 "name": "nvmf_tgt_poll_group_003", 00:16:53.155 "admin_qpairs": 0, 00:16:53.155 "io_qpairs": 0, 00:16:53.156 "current_admin_qpairs": 0, 00:16:53.156 "current_io_qpairs": 0, 00:16:53.156 "pending_bdev_io": 0, 00:16:53.156 "completed_nvme_io": 0, 00:16:53.156 "transports": [ 00:16:53.156 { 00:16:53.156 "trtype": "TCP" 00:16:53.156 } 00:16:53.156 ] 00:16:53.156 } 00:16:53.156 ] 00:16:53.156 }' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:53.156 06:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.156 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.454 Malloc1 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.454 [2024-07-26 06:09:04.582512] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:53.454 [2024-07-26 06:09:04.605636] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:53.454 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:53.454 could not add new controller: failed to write to nvme-fabrics device 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:53.454 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.040 06:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.040 06:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:54.040 06:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.040 06:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:54.040 06:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:55.943 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:55.943 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:55.943 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.944 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:55.944 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.944 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:55.944 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:56.202 06:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.202 [2024-07-26 06:09:07.465001] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:56.202 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:56.202 could not add new controller: failed to write to nvme-fabrics device 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.202 06:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.137 06:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.137 06:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:57.137 06:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.137 06:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:57.137 06:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:59.038 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:59.038 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:59.038 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.038 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:59.038 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.038 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:59.038 06:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.298 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.298 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:59.298 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:59.298 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.299 06:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.299 [2024-07-26 06:09:10.545696] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.299 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.236 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.236 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:00.236 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.236 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:00.236 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.140 [2024-07-26 06:09:13.461934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.140 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.400 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.400 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:02.400 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.400 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.400 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.400 06:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.966 06:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.966 06:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:17:02.967 06:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.967 06:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:02.967 06:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:04.868 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:04.868 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:04.868 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.868 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:04.868 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.868 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:04.868 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.126 [2024-07-26 06:09:16.340764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.126 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.063 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.063 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:06.063 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.063 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.063 
06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 [2024-07-26 06:09:19.220429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.966 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.532 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:08.532 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:08.532 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:08.532 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:08.532 06:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:11.095 06:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:11.096 06:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:11.096 06:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.096 06:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:11.096 06:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.096 06:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:11.096 06:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 [2024-07-26 06:09:22.063004] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.096 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.664 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:11.664 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:11.664 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.664 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:11.664 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:13.565 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:13.565 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:13.565 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.565 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:13.565 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.565 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:13.565 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:13.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:13.825 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 [2024-07-26 06:09:24.953856] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 
06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 
06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 [2024-07-26 06:09:25.001884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 [2024-07-26 06:09:25.050031] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 [2024-07-26 06:09:25.098231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.827 [2024-07-26 06:09:25.146432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.827 06:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.827 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.087 06:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:14.087 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:14.088 "tick_rate": 2700000000, 00:17:14.088 "poll_groups": [ 00:17:14.088 { 00:17:14.088 "name": "nvmf_tgt_poll_group_000", 00:17:14.088 "admin_qpairs": 2, 00:17:14.088 "io_qpairs": 84, 00:17:14.088 "current_admin_qpairs": 0, 00:17:14.088 "current_io_qpairs": 0, 00:17:14.088 "pending_bdev_io": 0, 00:17:14.088 "completed_nvme_io": 185, 00:17:14.088 "transports": [ 00:17:14.088 { 00:17:14.088 "trtype": "TCP" 00:17:14.088 } 00:17:14.088 ] 00:17:14.088 }, 00:17:14.088 { 00:17:14.088 "name": "nvmf_tgt_poll_group_001", 00:17:14.088 "admin_qpairs": 2, 00:17:14.088 "io_qpairs": 84, 00:17:14.088 "current_admin_qpairs": 0, 00:17:14.088 "current_io_qpairs": 0, 00:17:14.088 "pending_bdev_io": 0, 00:17:14.088 "completed_nvme_io": 184, 00:17:14.088 "transports": [ 00:17:14.088 { 00:17:14.088 "trtype": "TCP" 00:17:14.088 } 00:17:14.088 ] 00:17:14.088 }, 00:17:14.088 { 00:17:14.088 "name": "nvmf_tgt_poll_group_002", 00:17:14.088 "admin_qpairs": 1, 00:17:14.088 "io_qpairs": 84, 00:17:14.088 "current_admin_qpairs": 0, 00:17:14.088 "current_io_qpairs": 0, 00:17:14.088 "pending_bdev_io": 0, 00:17:14.088 "completed_nvme_io": 134, 00:17:14.088 "transports": [ 00:17:14.088 { 00:17:14.088 "trtype": "TCP" 00:17:14.088 } 00:17:14.088 ] 00:17:14.088 }, 00:17:14.088 { 00:17:14.088 "name": "nvmf_tgt_poll_group_003", 00:17:14.088 "admin_qpairs": 2, 00:17:14.088 "io_qpairs": 84, 00:17:14.088 "current_admin_qpairs": 0, 00:17:14.088 "current_io_qpairs": 0, 00:17:14.088 "pending_bdev_io": 0, 
00:17:14.088 "completed_nvme_io": 183, 00:17:14.088 "transports": [ 00:17:14.088 { 00:17:14.088 "trtype": "TCP" 00:17:14.088 } 00:17:14.088 ] 00:17:14.088 } 00:17:14.088 ] 00:17:14.088 }' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.088 rmmod nvme_tcp 00:17:14.088 rmmod nvme_fabrics 00:17:14.088 rmmod nvme_keyring 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 126302 ']' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 126302 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 126302 ']' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 126302 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126302 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126302' 00:17:14.088 killing process with pid 126302 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 126302 00:17:14.088 06:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 126302 00:17:15.466 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.726 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.635 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.635 00:17:17.635 real 0m27.747s 00:17:17.635 user 1m29.181s 00:17:17.635 sys 0m4.354s 00:17:17.635 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.635 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.635 ************************************ 00:17:17.635 END TEST nvmf_rpc 00:17:17.635 ************************************ 00:17:17.635 06:09:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:17.635 06:09:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:17.635 06:09:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.635 06:09:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.635 ************************************ 00:17:17.635 START TEST nvmf_invalid 00:17:17.635 ************************************ 00:17:17.635 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:17.894 * Looking for test storage... 00:17:17.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.895 06:09:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- paths/export.sh@5 -- # export PATH 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:17:17.895 06:09:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.800 06:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.800 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:19.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.801 06:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:19.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.801 06:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:19.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:19.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 
2 == 0 )) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.801 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:17:19.801 00:17:19.801 --- 10.0.0.2 ping statistics --- 00:17:19.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.801 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:17:19.801 00:17:19.801 --- 10.0.0.1 ping statistics --- 00:17:19.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.801 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.801 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=131673 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 131673 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 131673 ']' 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.060 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.060 [2024-07-26 06:09:31.236043] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:20.060 [2024-07-26 06:09:31.236222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.060 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.060 [2024-07-26 06:09:31.374094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.319 [2024-07-26 06:09:31.638785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.319 [2024-07-26 06:09:31.638864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:20.319 [2024-07-26 06:09:31.638893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.319 [2024-07-26 06:09:31.638915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.319 [2024-07-26 06:09:31.638937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.319 [2024-07-26 06:09:31.639056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.319 [2024-07-26 06:09:31.639118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.320 [2024-07-26 06:09:31.639143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.320 [2024-07-26 06:09:31.639157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:20.886 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14681 00:17:21.144 [2024-07-26 06:09:32.431580] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:21.144 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:21.144 { 00:17:21.144 "nqn": "nqn.2016-06.io.spdk:cnode14681", 00:17:21.144 "tgt_name": "foobar", 00:17:21.144 "method": "nvmf_create_subsystem", 00:17:21.144 "req_id": 1 00:17:21.144 } 00:17:21.144 Got JSON-RPC error response 00:17:21.144 response: 00:17:21.144 { 00:17:21.144 "code": -32603, 00:17:21.144 "message": "Unable to find target foobar" 00:17:21.144 }' 00:17:21.144 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:21.144 { 00:17:21.144 "nqn": "nqn.2016-06.io.spdk:cnode14681", 00:17:21.144 "tgt_name": "foobar", 00:17:21.144 "method": "nvmf_create_subsystem", 00:17:21.144 "req_id": 1 00:17:21.144 } 00:17:21.144 Got JSON-RPC error response 00:17:21.144 response: 00:17:21.144 { 00:17:21.144 "code": -32603, 00:17:21.144 "message": "Unable to find target foobar" 00:17:21.144 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:21.144 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:21.144 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26873 00:17:21.417 [2024-07-26 06:09:32.676480] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26873: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:21.417 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:21.417 { 00:17:21.417 "nqn": "nqn.2016-06.io.spdk:cnode26873", 00:17:21.417 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:21.417 "method": "nvmf_create_subsystem", 00:17:21.417 "req_id": 1 00:17:21.417 } 00:17:21.417 Got JSON-RPC error response 00:17:21.417 response: 
00:17:21.417 { 00:17:21.417 "code": -32602, 00:17:21.417 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:21.417 }' 00:17:21.417 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:21.417 { 00:17:21.417 "nqn": "nqn.2016-06.io.spdk:cnode26873", 00:17:21.417 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:21.417 "method": "nvmf_create_subsystem", 00:17:21.417 "req_id": 1 00:17:21.417 } 00:17:21.417 Got JSON-RPC error response 00:17:21.417 response: 00:17:21.417 { 00:17:21.417 "code": -32602, 00:17:21.417 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:21.417 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:21.417 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:21.417 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8406 00:17:21.674 [2024-07-26 06:09:32.933280] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8406: invalid model number 'SPDK_Controller' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:21.675 { 00:17:21.675 "nqn": "nqn.2016-06.io.spdk:cnode8406", 00:17:21.675 "model_number": "SPDK_Controller\u001f", 00:17:21.675 "method": "nvmf_create_subsystem", 00:17:21.675 "req_id": 1 00:17:21.675 } 00:17:21.675 Got JSON-RPC error response 00:17:21.675 response: 00:17:21.675 { 00:17:21.675 "code": -32602, 00:17:21.675 "message": "Invalid MN SPDK_Controller\u001f" 00:17:21.675 }' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:21.675 { 00:17:21.675 "nqn": "nqn.2016-06.io.spdk:cnode8406", 00:17:21.675 "model_number": "SPDK_Controller\u001f", 00:17:21.675 "method": "nvmf_create_subsystem", 00:17:21.675 "req_id": 1 00:17:21.675 } 00:17:21.675 
Got JSON-RPC error response 00:17:21.675 response: 00:17:21.675 { 00:17:21.675 "code": -32602, 00:17:21.675 "message": "Invalid MN SPDK_Controller\u001f" 00:17:21.675 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.675 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 
00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.676 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 
00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.676 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.935 
06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'aGTmvaW!LU" c;*,'\''\A)V' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'aGTmvaW!LU" c;*,'\''\A)V' nqn.2016-06.io.spdk:cnode24377 00:17:21.935 [2024-07-26 06:09:33.246373] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24377: invalid serial number 'aGTmvaW!LU" c;*,'\A)V' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:21.935 { 00:17:21.935 "nqn": "nqn.2016-06.io.spdk:cnode24377", 00:17:21.935 "serial_number": "aGTmvaW!LU\" c;*,'\''\\A)V", 00:17:21.935 "method": "nvmf_create_subsystem", 00:17:21.935 "req_id": 1 
00:17:21.935 } 00:17:21.935 Got JSON-RPC error response 00:17:21.935 response: 00:17:21.935 { 00:17:21.935 "code": -32602, 00:17:21.935 "message": "Invalid SN aGTmvaW!LU\" c;*,'\''\\A)V" 00:17:21.935 }' 00:17:21.935 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:21.935 { 00:17:21.935 "nqn": "nqn.2016-06.io.spdk:cnode24377", 00:17:21.935 "serial_number": "aGTmvaW!LU\" c;*,'\\A)V", 00:17:21.935 "method": "nvmf_create_subsystem", 00:17:21.935 "req_id": 1 00:17:21.935 } 00:17:21.935 Got JSON-RPC error response 00:17:21.935 response: 00:17:21.935 { 00:17:21.935 "code": -32602, 00:17:21.935 "message": "Invalid SN aGTmvaW!LU\" c;*,'\\A)V" 00:17:21.935 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.194 06:09:33 
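The pass/fail decision in the trace above is a bash glob match on the captured JSON-RPC error text: the pattern rendered by xtrace as `*\I\n\v\a\l\i\d\ \S\N*` is simply `*"Invalid SN"*` with each character escaped by `set -x`. A minimal reproduction of that checking step follows; the `$out` value is abridged from the log, and the message text is illustrative.

```shell
#!/usr/bin/env bash
# Sketch of the harness's error-matching step: capture the rpc.py output
# into $out, then glob for the expected error substring.
out='request: {"code": -32602, "message": "Invalid SN aGTmvaW!LU\" c;*,'\''\\A)V"}'
if [[ $out == *"Invalid SN"* ]]; then
    echo "matched expected error"    # prints: matched expected error
else
    echo "unexpected response" >&2
    exit 1
fi
```

Using a glob match rather than parsing the JSON keeps the check robust against the special characters that the random serial numbers intentionally contain.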
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.194 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:22.194 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:22.195 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:22.195 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:22.195 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:22.195 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.195 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:22.196 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:22.196 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';*tFDzK1Qf"ln~TNhf:f1`7fbQ;Mhydkec*F#)$;>' 00:17:22.196 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 
';*tFDzK1Qf"ln~TNhf:f1`7fbQ;Mhydkec*F#)$;>' nqn.2016-06.io.spdk:cnode1403 00:17:22.453 [2024-07-26 06:09:33.643727] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1403: invalid model number ';*tFDzK1Qf"ln~TNhf:f1`7fbQ;Mhydkec*F#)$;>' 00:17:22.453 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:22.453 { 00:17:22.453 "nqn": "nqn.2016-06.io.spdk:cnode1403", 00:17:22.453 "model_number": ";*tFDzK1Qf\"ln~TNhf:f1`7fbQ;Mhydkec*F#)$;>", 00:17:22.453 "method": "nvmf_create_subsystem", 00:17:22.453 "req_id": 1 00:17:22.453 } 00:17:22.453 Got JSON-RPC error response 00:17:22.453 response: 00:17:22.453 { 00:17:22.453 "code": -32602, 00:17:22.453 "message": "Invalid MN ;*tFDzK1Qf\"ln~TNhf:f1`7fbQ;Mhydkec*F#)$;>" 00:17:22.453 }' 00:17:22.453 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:22.453 { 00:17:22.453 "nqn": "nqn.2016-06.io.spdk:cnode1403", 00:17:22.453 "model_number": ";*tFDzK1Qf\"ln~TNhf:f1`7fbQ;Mhydkec*F#)$;>", 00:17:22.453 "method": "nvmf_create_subsystem", 00:17:22.453 "req_id": 1 00:17:22.453 } 00:17:22.453 Got JSON-RPC error response 00:17:22.453 response: 00:17:22.453 { 00:17:22.453 "code": -32602, 00:17:22.453 "message": "Invalid MN ;*tFDzK1Qf\"ln~TNhf:f1`7fbQ;Mhydkec*F#)$;>" 00:17:22.453 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:22.453 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:22.711 [2024-07-26 06:09:33.896671] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.711 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:22.968 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == 
\T\C\P ]] 00:17:22.968 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:22.968 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:22.968 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:22.968 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:23.225 [2024-07-26 06:09:34.407478] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:23.225 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:23.225 { 00:17:23.225 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:23.225 "listen_address": { 00:17:23.225 "trtype": "tcp", 00:17:23.225 "traddr": "", 00:17:23.225 "trsvcid": "4421" 00:17:23.225 }, 00:17:23.225 "method": "nvmf_subsystem_remove_listener", 00:17:23.225 "req_id": 1 00:17:23.225 } 00:17:23.225 Got JSON-RPC error response 00:17:23.225 response: 00:17:23.225 { 00:17:23.225 "code": -32602, 00:17:23.225 "message": "Invalid parameters" 00:17:23.225 }' 00:17:23.225 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:23.225 { 00:17:23.225 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:23.225 "listen_address": { 00:17:23.225 "trtype": "tcp", 00:17:23.225 "traddr": "", 00:17:23.225 "trsvcid": "4421" 00:17:23.225 }, 00:17:23.225 "method": "nvmf_subsystem_remove_listener", 00:17:23.225 "req_id": 1 00:17:23.225 } 00:17:23.225 Got JSON-RPC error response 00:17:23.225 response: 00:17:23.225 { 00:17:23.225 "code": -32602, 00:17:23.225 "message": "Invalid parameters" 00:17:23.225 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:23.226 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1276 -i 0 00:17:23.483 [2024-07-26 06:09:34.652302] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1276: invalid cntlid range [0-65519] 00:17:23.483 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:23.483 { 00:17:23.483 "nqn": "nqn.2016-06.io.spdk:cnode1276", 00:17:23.483 "min_cntlid": 0, 00:17:23.483 "method": "nvmf_create_subsystem", 00:17:23.483 "req_id": 1 00:17:23.483 } 00:17:23.483 Got JSON-RPC error response 00:17:23.483 response: 00:17:23.483 { 00:17:23.483 "code": -32602, 00:17:23.483 "message": "Invalid cntlid range [0-65519]" 00:17:23.483 }' 00:17:23.483 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:23.483 { 00:17:23.483 "nqn": "nqn.2016-06.io.spdk:cnode1276", 00:17:23.483 "min_cntlid": 0, 00:17:23.483 "method": "nvmf_create_subsystem", 00:17:23.483 "req_id": 1 00:17:23.483 } 00:17:23.483 Got JSON-RPC error response 00:17:23.483 response: 00:17:23.483 { 00:17:23.483 "code": -32602, 00:17:23.483 "message": "Invalid cntlid range [0-65519]" 00:17:23.483 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:23.483 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29901 -i 65520 00:17:23.741 [2024-07-26 06:09:34.897136] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29901: invalid cntlid range [65520-65519] 00:17:23.741 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:23.741 { 00:17:23.741 "nqn": "nqn.2016-06.io.spdk:cnode29901", 00:17:23.741 "min_cntlid": 65520, 00:17:23.741 "method": "nvmf_create_subsystem", 00:17:23.741 "req_id": 1 00:17:23.741 } 00:17:23.741 Got JSON-RPC error 
response 00:17:23.741 response: 00:17:23.741 { 00:17:23.741 "code": -32602, 00:17:23.741 "message": "Invalid cntlid range [65520-65519]" 00:17:23.741 }' 00:17:23.741 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:23.741 { 00:17:23.741 "nqn": "nqn.2016-06.io.spdk:cnode29901", 00:17:23.741 "min_cntlid": 65520, 00:17:23.741 "method": "nvmf_create_subsystem", 00:17:23.741 "req_id": 1 00:17:23.741 } 00:17:23.741 Got JSON-RPC error response 00:17:23.741 response: 00:17:23.741 { 00:17:23.741 "code": -32602, 00:17:23.741 "message": "Invalid cntlid range [65520-65519]" 00:17:23.741 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:23.741 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11488 -I 0 00:17:23.999 [2024-07-26 06:09:35.154015] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11488: invalid cntlid range [1-0] 00:17:23.999 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:23.999 { 00:17:23.999 "nqn": "nqn.2016-06.io.spdk:cnode11488", 00:17:23.999 "max_cntlid": 0, 00:17:23.999 "method": "nvmf_create_subsystem", 00:17:23.999 "req_id": 1 00:17:23.999 } 00:17:23.999 Got JSON-RPC error response 00:17:23.999 response: 00:17:23.999 { 00:17:23.999 "code": -32602, 00:17:23.999 "message": "Invalid cntlid range [1-0]" 00:17:23.999 }' 00:17:23.999 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:23.999 { 00:17:23.999 "nqn": "nqn.2016-06.io.spdk:cnode11488", 00:17:23.999 "max_cntlid": 0, 00:17:23.999 "method": "nvmf_create_subsystem", 00:17:23.999 "req_id": 1 00:17:23.999 } 00:17:23.999 Got JSON-RPC error response 00:17:23.999 response: 00:17:23.999 { 00:17:23.999 "code": -32602, 00:17:23.999 "message": "Invalid cntlid range [1-0]" 00:17:23.999 } == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:23.999 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23272 -I 65520 00:17:24.258 [2024-07-26 06:09:35.402900] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23272: invalid cntlid range [1-65520] 00:17:24.258 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:24.258 { 00:17:24.258 "nqn": "nqn.2016-06.io.spdk:cnode23272", 00:17:24.258 "max_cntlid": 65520, 00:17:24.258 "method": "nvmf_create_subsystem", 00:17:24.258 "req_id": 1 00:17:24.258 } 00:17:24.258 Got JSON-RPC error response 00:17:24.258 response: 00:17:24.258 { 00:17:24.258 "code": -32602, 00:17:24.258 "message": "Invalid cntlid range [1-65520]" 00:17:24.258 }' 00:17:24.258 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:24.258 { 00:17:24.258 "nqn": "nqn.2016-06.io.spdk:cnode23272", 00:17:24.258 "max_cntlid": 65520, 00:17:24.258 "method": "nvmf_create_subsystem", 00:17:24.258 "req_id": 1 00:17:24.258 } 00:17:24.258 Got JSON-RPC error response 00:17:24.258 response: 00:17:24.258 { 00:17:24.258 "code": -32602, 00:17:24.258 "message": "Invalid cntlid range [1-65520]" 00:17:24.258 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:24.258 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9817 -i 6 -I 5 00:17:24.516 [2024-07-26 06:09:35.651758] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9817: invalid cntlid range [6-5] 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:24.516 { 00:17:24.516 "nqn": "nqn.2016-06.io.spdk:cnode9817", 00:17:24.516 
"min_cntlid": 6, 00:17:24.516 "max_cntlid": 5, 00:17:24.516 "method": "nvmf_create_subsystem", 00:17:24.516 "req_id": 1 00:17:24.516 } 00:17:24.516 Got JSON-RPC error response 00:17:24.516 response: 00:17:24.516 { 00:17:24.516 "code": -32602, 00:17:24.516 "message": "Invalid cntlid range [6-5]" 00:17:24.516 }' 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:24.516 { 00:17:24.516 "nqn": "nqn.2016-06.io.spdk:cnode9817", 00:17:24.516 "min_cntlid": 6, 00:17:24.516 "max_cntlid": 5, 00:17:24.516 "method": "nvmf_create_subsystem", 00:17:24.516 "req_id": 1 00:17:24.516 } 00:17:24.516 Got JSON-RPC error response 00:17:24.516 response: 00:17:24.516 { 00:17:24.516 "code": -32602, 00:17:24.516 "message": "Invalid cntlid range [6-5]" 00:17:24.516 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:24.516 { 00:17:24.516 "name": "foobar", 00:17:24.516 "method": "nvmf_delete_target", 00:17:24.516 "req_id": 1 00:17:24.516 } 00:17:24.516 Got JSON-RPC error response 00:17:24.516 response: 00:17:24.516 { 00:17:24.516 "code": -32602, 00:17:24.516 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:24.516 }' 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:24.516 { 00:17:24.516 "name": "foobar", 00:17:24.516 "method": "nvmf_delete_target", 00:17:24.516 "req_id": 1 00:17:24.516 } 00:17:24.516 Got JSON-RPC error response 00:17:24.516 response: 00:17:24.516 { 00:17:24.516 "code": -32602, 00:17:24.516 "message": "The specified target doesn't exist, cannot delete it." 
00:17:24.516 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.516 rmmod nvme_tcp 00:17:24.516 rmmod nvme_fabrics 00:17:24.516 rmmod nvme_keyring 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 131673 ']' 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 131673 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 131673 ']' 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 131673 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131673 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131673' 00:17:24.516 killing process with pid 131673 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 131673 00:17:24.516 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 131673 00:17:25.894 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.894 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.428 00:17:28.428 real 0m10.204s 00:17:28.428 user 0m24.463s 00:17:28.428 sys 0m2.574s 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:28.428 ************************************ 00:17:28.428 END TEST nvmf_invalid 00:17:28.428 ************************************ 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.428 ************************************ 00:17:28.428 START TEST nvmf_connect_stress 00:17:28.428 ************************************ 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:28.428 * Looking for test storage... 
00:17:28.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.428 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.429 06:09:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.802 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.802 06:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.802 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.802 06:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:29.802 
06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.802 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.063 
06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:17:30.063 00:17:30.063 --- 10.0.0.2 ping statistics --- 00:17:30.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.063 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:17:30.063 00:17:30.063 --- 10.0.0.1 ping statistics --- 00:17:30.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.063 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=134440 00:17:30.063 06:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 134440 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 134440 ']' 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.063 06:09:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.063 [2024-07-26 06:09:41.379500] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:30.063 [2024-07-26 06:09:41.379668] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.322 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.322 [2024-07-26 06:09:41.535086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.580 [2024-07-26 06:09:41.800721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:30.580 [2024-07-26 06:09:41.800804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.580 [2024-07-26 06:09:41.800838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.580 [2024-07-26 06:09:41.800859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.580 [2024-07-26 06:09:41.800881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.580 [2024-07-26 06:09:41.801015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.580 [2024-07-26 06:09:41.801094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.580 [2024-07-26 06:09:41.801103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.146 [2024-07-26 06:09:42.345934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.146 [2024-07-26 06:09:42.378628] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.146 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.146 NULL1 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=134595 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.147 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.147 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.713 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.713 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:31.713 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.713 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.713 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.033 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.033 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:32.033 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.033 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.033 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.289 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.289 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:32.289 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.289 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.289 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
00:17:32.547 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.547 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:32.547 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.547 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.547 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.804 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.804 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:32.804 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.804 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.804 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.061 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.061 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:33.061 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.061 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.061 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.630 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.630 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 
134595 00:17:33.630 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.630 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.630 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.888 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.888 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:33.888 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.888 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.888 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.146 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.146 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:34.146 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.146 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.146 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.404 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.404 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:34.404 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.404 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:34.404 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.663 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.663 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:34.923 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.923 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.923 06:09:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.182 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.182 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:35.182 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.182 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.182 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.441 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.441 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:35.441 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.441 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.441 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.700 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:35.700 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:35.700 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.700 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.700 06:09:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.266 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.266 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:36.266 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.266 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.266 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.526 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.526 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:36.526 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.526 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.526 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.786 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.786 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:36.786 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- 
# rpc_cmd 00:17:36.786 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.786 06:09:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.045 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.045 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:37.045 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.045 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.045 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.304 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.304 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:37.304 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.304 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.304 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.869 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.869 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:37.869 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.869 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.869 06:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- 
# set +x 00:17:38.128 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.128 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:38.128 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.128 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.128 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.387 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.387 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:38.387 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.387 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.387 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.647 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:38.647 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.647 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.647 06:09:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.905 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.905 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 134595 00:17:38.905 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.905 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.905 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.471 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.471 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:39.471 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.471 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.471 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.729 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.729 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:39.729 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.729 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.729 06:09:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.988 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.988 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:39.988 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.988 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:39.988 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.263 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.263 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:40.263 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.263 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.263 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.520 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.521 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:40.521 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.521 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.521 06:09:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.086 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.086 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:41.086 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.086 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.086 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.345 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:41.345 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:41.345 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.345 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.345 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.604 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.604 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 134595 00:17:41.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (134595) - No such process 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 134595 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:41.605 06:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.605 rmmod nvme_tcp 00:17:41.605 rmmod nvme_fabrics 00:17:41.605 rmmod nvme_keyring 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 134440 ']' 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 134440 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 134440 ']' 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 134440 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134440 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134440' 00:17:41.605 killing process with pid 134440 00:17:41.605 06:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 134440 00:17:41.605 06:09:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 134440 00:17:42.977 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:17:42.977 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.977 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.977 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.977 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.978 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.978 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.978 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.978 06:09:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.884 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.884 00:17:44.884 real 0m17.002s 00:17:44.884 user 0m42.663s 00:17:44.884 sys 0m5.687s 00:17:44.884 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.884 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.884 ************************************ 00:17:44.884 END TEST nvmf_connect_stress 00:17:44.884 ************************************ 00:17:44.884 06:09:56 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:44.884 06:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:44.884 06:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.884 06:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.142 ************************************ 00:17:45.142 START TEST nvmf_fused_ordering 00:17:45.142 ************************************ 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:45.142 * Looking for test storage... 00:17:45.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.142 06:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.142 06:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.142 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:45.143 06:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.143 06:09:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.680 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.680 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:17:47.680 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.681 06:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:47.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:47.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:47.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:47.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.681 06:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:47.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:17:47.681 00:17:47.681 --- 10.0.0.2 ping statistics --- 00:17:47.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.681 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:17:47.681 00:17:47.681 --- 10.0.0.1 ping statistics --- 00:17:47.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.681 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:17:47.681 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=137867 00:17:47.682 06:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 137867 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 137867 ']' 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.682 06:09:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.682 [2024-07-26 06:09:58.661640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:47.682 [2024-07-26 06:09:58.661767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.682 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.682 [2024-07-26 06:09:58.797426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.941 [2024-07-26 06:09:59.059572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:47.941 [2024-07-26 06:09:59.059661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.941 [2024-07-26 06:09:59.059691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.941 [2024-07-26 06:09:59.059717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.941 [2024-07-26 06:09:59.059740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.941 [2024-07-26 06:09:59.059798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 [2024-07-26 06:09:59.698766] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 [2024-07-26 06:09:59.715041] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 NULL1 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:48.511 06:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.511 06:09:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:48.511 [2024-07-26 06:09:59.789831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:48.511 [2024-07-26 06:09:59.789945] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138016 ] 00:17:48.769 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.335 Attached to nqn.2016-06.io.spdk:cnode1 00:17:49.335 Namespace ID: 1 size: 1GB 00:17:49.335 fused_ordering(0) 00:17:49.335 fused_ordering(1) 00:17:49.335 fused_ordering(2) [fused_ordering(3) through fused_ordering(896) elided: repetitive per-operation counter output emitted between 00:17:49.335 and 00:17:52.033]
00:17:52.033 fused_ordering(897) 00:17:52.033 fused_ordering(898) 00:17:52.033 fused_ordering(899) 00:17:52.033 fused_ordering(900) 00:17:52.033 fused_ordering(901) 00:17:52.033 fused_ordering(902) 00:17:52.033 fused_ordering(903) 00:17:52.033 fused_ordering(904) 00:17:52.033 fused_ordering(905) 00:17:52.033 fused_ordering(906) 00:17:52.033 fused_ordering(907) 00:17:52.033 fused_ordering(908) 00:17:52.033 fused_ordering(909) 00:17:52.033 fused_ordering(910) 00:17:52.033 fused_ordering(911) 00:17:52.033 fused_ordering(912) 00:17:52.033 fused_ordering(913) 00:17:52.033 fused_ordering(914) 00:17:52.033 fused_ordering(915) 00:17:52.033 fused_ordering(916) 00:17:52.033 fused_ordering(917) 00:17:52.033 fused_ordering(918) 00:17:52.033 fused_ordering(919) 00:17:52.033 fused_ordering(920) 00:17:52.033 fused_ordering(921) 00:17:52.033 fused_ordering(922) 00:17:52.033 fused_ordering(923) 00:17:52.033 fused_ordering(924) 00:17:52.033 fused_ordering(925) 00:17:52.033 fused_ordering(926) 00:17:52.033 fused_ordering(927) 00:17:52.033 fused_ordering(928) 00:17:52.033 fused_ordering(929) 00:17:52.033 fused_ordering(930) 00:17:52.033 fused_ordering(931) 00:17:52.033 fused_ordering(932) 00:17:52.033 fused_ordering(933) 00:17:52.033 fused_ordering(934) 00:17:52.033 fused_ordering(935) 00:17:52.033 fused_ordering(936) 00:17:52.033 fused_ordering(937) 00:17:52.033 fused_ordering(938) 00:17:52.033 fused_ordering(939) 00:17:52.033 fused_ordering(940) 00:17:52.033 fused_ordering(941) 00:17:52.033 fused_ordering(942) 00:17:52.033 fused_ordering(943) 00:17:52.033 fused_ordering(944) 00:17:52.033 fused_ordering(945) 00:17:52.033 fused_ordering(946) 00:17:52.033 fused_ordering(947) 00:17:52.033 fused_ordering(948) 00:17:52.033 fused_ordering(949) 00:17:52.033 fused_ordering(950) 00:17:52.033 fused_ordering(951) 00:17:52.033 fused_ordering(952) 00:17:52.033 fused_ordering(953) 00:17:52.033 fused_ordering(954) 00:17:52.033 fused_ordering(955) 00:17:52.033 fused_ordering(956) 00:17:52.033 
fused_ordering(957) 00:17:52.033 fused_ordering(958) 00:17:52.033 fused_ordering(959) 00:17:52.033 fused_ordering(960) 00:17:52.033 fused_ordering(961) 00:17:52.033 fused_ordering(962) 00:17:52.033 fused_ordering(963) 00:17:52.033 fused_ordering(964) 00:17:52.033 fused_ordering(965) 00:17:52.033 fused_ordering(966) 00:17:52.033 fused_ordering(967) 00:17:52.033 fused_ordering(968) 00:17:52.033 fused_ordering(969) 00:17:52.033 fused_ordering(970) 00:17:52.033 fused_ordering(971) 00:17:52.033 fused_ordering(972) 00:17:52.033 fused_ordering(973) 00:17:52.033 fused_ordering(974) 00:17:52.033 fused_ordering(975) 00:17:52.033 fused_ordering(976) 00:17:52.033 fused_ordering(977) 00:17:52.033 fused_ordering(978) 00:17:52.033 fused_ordering(979) 00:17:52.033 fused_ordering(980) 00:17:52.033 fused_ordering(981) 00:17:52.033 fused_ordering(982) 00:17:52.033 fused_ordering(983) 00:17:52.033 fused_ordering(984) 00:17:52.033 fused_ordering(985) 00:17:52.033 fused_ordering(986) 00:17:52.033 fused_ordering(987) 00:17:52.033 fused_ordering(988) 00:17:52.033 fused_ordering(989) 00:17:52.033 fused_ordering(990) 00:17:52.033 fused_ordering(991) 00:17:52.033 fused_ordering(992) 00:17:52.033 fused_ordering(993) 00:17:52.033 fused_ordering(994) 00:17:52.033 fused_ordering(995) 00:17:52.033 fused_ordering(996) 00:17:52.033 fused_ordering(997) 00:17:52.033 fused_ordering(998) 00:17:52.033 fused_ordering(999) 00:17:52.033 fused_ordering(1000) 00:17:52.033 fused_ordering(1001) 00:17:52.033 fused_ordering(1002) 00:17:52.033 fused_ordering(1003) 00:17:52.033 fused_ordering(1004) 00:17:52.033 fused_ordering(1005) 00:17:52.033 fused_ordering(1006) 00:17:52.033 fused_ordering(1007) 00:17:52.033 fused_ordering(1008) 00:17:52.033 fused_ordering(1009) 00:17:52.033 fused_ordering(1010) 00:17:52.033 fused_ordering(1011) 00:17:52.033 fused_ordering(1012) 00:17:52.033 fused_ordering(1013) 00:17:52.033 fused_ordering(1014) 00:17:52.033 fused_ordering(1015) 00:17:52.033 fused_ordering(1016) 00:17:52.033 
fused_ordering(1017) 00:17:52.033 fused_ordering(1018) 00:17:52.033 fused_ordering(1019) 00:17:52.033 fused_ordering(1020) 00:17:52.033 fused_ordering(1021) 00:17:52.033 fused_ordering(1022) 00:17:52.033 fused_ordering(1023) 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:52.033 rmmod nvme_tcp 00:17:52.033 rmmod nvme_fabrics 00:17:52.033 rmmod nvme_keyring 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 137867 ']' 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 137867 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 137867 ']' 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 137867 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137867 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137867' 00:17:52.033 killing process with pid 137867 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 137867 00:17:52.033 06:10:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 137867 00:17:53.411 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.411 06:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.338 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.338 00:17:55.338 real 0m10.384s 00:17:55.338 user 0m7.989s 00:17:55.338 sys 0m4.036s 00:17:55.338 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.338 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.339 ************************************ 00:17:55.339 END TEST nvmf_fused_ordering 00:17:55.339 ************************************ 00:17:55.339 06:10:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:55.339 06:10:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:55.339 06:10:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.339 06:10:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.339 ************************************ 00:17:55.339 START TEST nvmf_ns_masking 00:17:55.339 ************************************ 00:17:55.339 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:55.596 * Looking for test storage... 
00:17:55.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.596 
06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2f443310-c195-49ab-b62e-1c97130d502b 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4f08c918-b4b1-445f-b034-8afbc0e16d87 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=902ee89d-5394-45b5-9556-33d726a94fee 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.596 06:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.596 06:10:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.500 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:17:57.501 Found net devices under 0000:0a:00.0: cvl_0_0
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:17:57.501 Found net devices under 0000:0a:00.1: cvl_0_1
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:57.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:57.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms
00:17:57.501 
00:17:57.501 --- 10.0.0.2 ping statistics ---
00:17:57.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:57.501 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:57.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:57.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms
00:17:57.501 
00:17:57.501 --- 10.0.0.1 ping statistics ---
00:17:57.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:57.501 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=140481
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 140481
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 140481 ']'
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:57.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:57.501 06:10:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:57.761 [2024-07-26 06:10:08.861222] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
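The `waitforlisten 140481` step above blocks until the freshly launched nvmf_tgt answers on its RPC UNIX socket (`/var/tmp/spdk.sock`), capping attempts via `max_retries=100`. A minimal sketch of that polling pattern, using a hypothetical `wait_for_path` helper rather than SPDK's actual implementation:

```shell
# Hedged sketch of a wait-for-listen loop (hypothetical helper, not
# SPDK's waitforlisten): poll until $path appears or roughly $timeout
# seconds elapse. Uses -e rather than -S so it can be smoke-tested
# with a plain file standing in for the UNIX socket.
wait_for_path() {
    local path=$1 timeout=${2:-10} i=0
    while (( i++ < timeout * 10 )); do
        [[ -e $path ]] && return 0   # target showed up
        sleep 0.1                    # retry interval between probes
    done
    return 1                         # gave up, mirroring a retry cap
}
```

The real helper does more than check for the socket file; the log shows it waiting for the process to actually serve requests before the script proceeds to `timing_exit start_nvmf_tgt`.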
00:17:57.761 [2024-07-26 06:10:08.861364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:57.761 EAL: No free 2048 kB hugepages reported on node 1
00:17:57.761 [2024-07-26 06:10:09.004043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:58.021 [2024-07-26 06:10:09.264968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:58.021 [2024-07-26 06:10:09.265054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:58.021 [2024-07-26 06:10:09.265095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:58.021 [2024-07-26 06:10:09.265120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:58.021 [2024-07-26 06:10:09.265143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
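Once the target is up, the remainder of the log exercises ns_masking.sh's `ns_is_visible` check: a namespace counts as visible when `nvme id-ns ... -o json | jq -r .nguid` returns a non-zero NGUID, and as masked when the controller reports all zeroes. A sketch of just that comparison, with a hypothetical `nguid_visible` helper operating on the NGUID string alone so it can run without an NVMe device:

```shell
# Hedged sketch of the visibility test the log repeats (hypothetical
# helper, not ns_masking.sh itself): 32 zero nibbles in the reported
# NGUID means the host has been masked out of the namespace.
nguid_visible() {
    local nguid=$1
    [[ $nguid != "00000000000000000000000000000000" ]]
}
```

In the log, visible namespaces report NGUIDs such as 3088aedb3d704e34accc530d35b73a24, while the `NOT ns_is_visible 0x1` cases after `nvmf_ns_remove_host` observe the all-zero NGUID.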
00:17:58.021 [2024-07-26 06:10:09.265199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:58.588 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:58.588 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0
00:17:58.588 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:58.588 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable
00:17:58.588 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:58.588 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:58.588 06:10:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:17:58.847 [2024-07-26 06:10:10.012460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:58.847 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:17:58.847 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:17:58.847 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:17:59.104 Malloc1
00:17:59.104 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:17:59.671 Malloc2
00:17:59.671 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:17:59.671 06:10:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:17:59.928 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:00.187 [2024-07-26 06:10:11.465481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:00.187 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:18:00.187 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 902ee89d-5394-45b5-9556-33d726a94fee -a 10.0.0.2 -s 4420 -i 4
00:18:00.445 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:18:00.445 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:18:00.445 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:18:00.445 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:18:00.445 06:10:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:02.351 [ 0]:0x1
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:02.351 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:02.610 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3088aedb3d704e34accc530d35b73a24
00:18:02.610 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3088aedb3d704e34accc530d35b73a24 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:02.610 06:10:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:02.869 [ 0]:0x1
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3088aedb3d704e34accc530d35b73a24
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3088aedb3d704e34accc530d35b73a24 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:02.869 [ 1]:0x2
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cf5e2aa95c847759d79d9ea7011f50f
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cf5e2aa95c847759d79d9ea7011f50f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:02.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:02.869 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:03.439 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:18:03.439 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:18:03.439 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 902ee89d-5394-45b5-9556-33d726a94fee -a 10.0.0.2 -s 4420 -i 4
00:18:03.700 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:18:03.700 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:18:03.700 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:18:03.700 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:18:03.700 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:18:03.700 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:18:06.235 06:10:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:18:06.235 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:18:06.235 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:06.235 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:06.236 [ 0]:0x2
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cf5e2aa95c847759d79d9ea7011f50f
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cf5e2aa95c847759d79d9ea7011f50f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:06.236 [ 0]:0x1
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3088aedb3d704e34accc530d35b73a24
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3088aedb3d704e34accc530d35b73a24 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2
00:18:06.236 [ 1]:0x2
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cf5e2aa95c847759d79d9ea7011f50f
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cf5e2aa95c847759d79d9ea7011f50f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:06.236 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:06.494 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:06.752 [ 0]:0x2
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cf5e2aa95c847759d79d9ea7011f50f
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cf5e2aa95c847759d79d9ea7011f50f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:06.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:06.752 06:10:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:07.011 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:18:07.011 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 902ee89d-5394-45b5-9556-33d726a94fee -a 10.0.0.2 -s 4420 -i 4
00:18:07.269 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:07.269 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:18:07.269 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:18:07.269 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:18:07.269 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:18:07.269 06:10:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:09.173 [ 0]:0x1
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3088aedb3d704e34accc530d35b73a24
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3088aedb3d704e34accc530d35b73a24 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:09.173 [ 1]:0x2
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:09.173 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:09.434 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cf5e2aa95c847759d79d9ea7011f50f
00:18:09.434 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cf5e2aa95c847759d79d9ea7011f50f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:09.434 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:09.706 [ 0]:0x2
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cf5e2aa95c847759d79d9ea7011f50f
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cf5e2aa95c847759d79d9ea7011f50f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:09.706 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:09.964 [2024-07-26 06:10:21.173510] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:09.964 request: 00:18:09.964 { 00:18:09.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.964 "nsid": 2, 00:18:09.964 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.964 "method": "nvmf_ns_remove_host", 00:18:09.964 "req_id": 1 00:18:09.964 } 00:18:09.964 Got JSON-RPC error response 00:18:09.964 response: 00:18:09.964 { 00:18:09.964 "code": -32602, 00:18:09.964 "message": "Invalid parameters" 00:18:09.964 } 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.964 06:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:09.964 [ 0]:0x2 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cf5e2aa95c847759d79d9ea7011f50f 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cf5e2aa95c847759d79d9ea7011f50f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:09.964 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:10.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=142096 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 142096 /var/tmp/host.sock 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 142096 ']' 00:18:10.222 06:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:10.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.222 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 [2024-07-26 06:10:21.422241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:10.222 [2024-07-26 06:10:21.422390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142096 ] 00:18:10.222 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.222 [2024-07-26 06:10:21.544652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.479 [2024-07-26 06:10:21.783334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.416 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.416 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:11.416 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:11.674 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:11.932 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2f443310-c195-49ab-b62e-1c97130d502b 00:18:11.932 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:18:11.932 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2F443310C19549ABB62E1C97130D502B -i 00:18:12.190 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4f08c918-b4b1-445f-b034-8afbc0e16d87 00:18:12.190 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:18:12.190 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4F08C918B4B1445FB0348AFBC0E16D87 -i 00:18:12.447 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:12.704 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:13.269 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:13.269 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:13.527 nvme0n1 00:18:13.527 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:13.527 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:13.786 nvme1n2 00:18:14.045 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:14.045 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:14.045 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:14.045 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:14.045 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2f443310-c195-49ab-b62e-1c97130d502b == \2\f\4\4\3\3\1\0\-\c\1\9\5\-\4\9\a\b\-\b\6\2\e\-\1\c\9\7\1\3\0\d\5\0\2\b ]] 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:14.304 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:14.562 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4f08c918-b4b1-445f-b034-8afbc0e16d87 == \4\f\0\8\c\9\1\8\-\b\4\b\1\-\4\4\5\f\-\b\0\3\4\-\8\a\f\b\c\0\e\1\6\d\8\7 ]] 00:18:14.562 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 142096 00:18:14.562 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 142096 ']' 00:18:14.562 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 142096 00:18:14.562 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:14.562 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:14.562 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 142096 00:18:14.821 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:14.821 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:14.821 
06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 142096' 00:18:14.821 killing process with pid 142096 00:18:14.821 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 142096 00:18:14.821 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 142096 00:18:16.724 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:18:16.981 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.239 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:17.239 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.240 rmmod nvme_tcp 00:18:17.240 rmmod nvme_fabrics 00:18:17.240 rmmod nvme_keyring 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:17.240 06:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 140481 ']' 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 140481 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 140481 ']' 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 140481 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140481 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140481' 00:18:17.240 killing process with pid 140481 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 140481 00:18:17.240 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 140481 00:18:19.149 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:18:19.149 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.149 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.149 06:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.149 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.149 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.149 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.149 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.149 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.053 00:18:21.053 real 0m25.528s 00:18:21.053 user 0m34.813s 00:18:21.053 sys 0m4.427s 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:21.053 ************************************ 00:18:21.053 END TEST nvmf_ns_masking 00:18:21.053 ************************************ 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.053 ************************************ 00:18:21.053 START TEST nvmf_nvme_cli 
00:18:21.053 ************************************ 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:21.053 * Looking for test storage... 00:18:21.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.053 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.054 06:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:21.054 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.955 
06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.955 06:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:22.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:22.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.955 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:23.213 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:23.213 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.213 06:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.213 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.214 06:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:18:23.214 00:18:23.214 --- 10.0.0.2 ping statistics --- 00:18:23.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.214 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:18:23.214 00:18:23.214 --- 10.0.0.1 ping statistics --- 00:18:23.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.214 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=145102 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 145102 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 145102 ']' 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.214 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:23.214 [2024-07-26 06:10:34.534768] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:23.214 [2024-07-26 06:10:34.534915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.471 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.471 [2024-07-26 06:10:34.678792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:23.730 [2024-07-26 06:10:34.944888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.730 [2024-07-26 06:10:34.944976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:23.730 [2024-07-26 06:10:34.945005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.730 [2024-07-26 06:10:34.945028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.730 [2024-07-26 06:10:34.945052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.730 [2024-07-26 06:10:34.945207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.730 [2024-07-26 06:10:34.945264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.731 [2024-07-26 06:10:34.945314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.731 [2024-07-26 06:10:34.945330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 [2024-07-26 06:10:35.572207] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.298 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.558 Malloc0 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.558 Malloc1 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.558 06:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.558 [2024-07-26 06:10:35.763142] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.558 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:24.816 00:18:24.816 Discovery Log Number of Records 2, Generation counter 2 00:18:24.816 =====Discovery Log Entry 0====== 00:18:24.816 trtype: tcp 00:18:24.816 adrfam: ipv4 00:18:24.816 subtype: current discovery subsystem 00:18:24.816 treq: not required 00:18:24.816 portid: 0 00:18:24.816 trsvcid: 4420 00:18:24.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:24.816 traddr: 10.0.0.2 00:18:24.816 eflags: explicit discovery connections, duplicate discovery information 00:18:24.816 sectype: none 00:18:24.816 =====Discovery Log Entry 1====== 00:18:24.816 trtype: tcp 00:18:24.816 adrfam: ipv4 00:18:24.816 subtype: nvme subsystem 00:18:24.816 treq: not required 00:18:24.816 portid: 0 00:18:24.816 trsvcid: 4420 00:18:24.816 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:24.816 traddr: 10.0.0.2 00:18:24.816 eflags: none 00:18:24.816 sectype: none 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:24.816 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:25.382 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:25.382 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:25.382 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.382 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:25.382 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:25.382 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:27.916 /dev/nvme0n1 ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:27.916 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.917 rmmod nvme_tcp 00:18:27.917 rmmod nvme_fabrics 00:18:27.917 rmmod 
nvme_keyring 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 145102 ']' 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 145102 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 145102 ']' 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 145102 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145102 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145102' 00:18:27.917 killing process with pid 145102 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 145102 00:18:27.917 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 145102 00:18:29.329 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:18:29.329 06:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.329 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:29.329 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:29.329 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.329 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.330 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.330 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:29.330 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.868 00:18:31.868 real 0m10.357s 00:18:31.868 user 0m21.421s 00:18:31.868 sys 0m2.422s 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:31.868 ************************************ 00:18:31.868 END TEST nvmf_nvme_cli 00:18:31.868 ************************************ 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.868 ************************************ 00:18:31.868 START TEST nvmf_auth_target 00:18:31.868 ************************************ 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.868 * Looking for test storage... 00:18:31.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.868 06:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
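The `paths/export.sh` entries above prepend the same golangci/protoc/go directories each time the file is sourced, so PATH accumulates many duplicate entries. A generic first-seen-wins dedupe (a sketch, not part of `paths/export.sh`):

```shell
# Collapse duplicate PATH entries, keeping the first occurrence of each.
dedupe_path() {
    local IFS=':' entry out=''
    for entry in $1; do                      # word-split on ':' via IFS
        case ":$out:" in
            *":$entry:"*) ;;                 # already kept, skip the duplicate
            *) out=${out:+$out:}$entry ;;
        esac
    done
    printf '%s\n' "$out"
}
```

Usage would be `PATH=$(dedupe_path "$PATH")` at the end of the export script.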
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.868 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.869 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.770 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.771 06:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:33.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:33.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.771 06:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:33.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:33.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.771 06:10:44 
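The discovery loop above resolves each PCI function to its network interface names through sysfs (`common.sh@383` globs the device's `net/` directory, `@399` strips the path prefix). That lookup can be sketched as:

```shell
# List the network interfaces bound to one PCI function, as the loop above
# does: glob the device's net/ directory in sysfs, fail if nothing is there,
# and keep only the interface names.
pci_net_devs() {
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)
    [ -e "${devs[0]}" ] || return 1          # no netdev under this function
    printf '%s\n' "${devs[@]##*/}"           # strip the sysfs path prefix
}
```

On this rig the result is driver-renamed names like `cvl_0_0`; on a generic machine the same lookup returns whatever the kernel named the interface.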
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:33.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:18:33.771 00:18:33.771 --- 10.0.0.2 ping statistics --- 00:18:33.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.771 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:18:33.771 00:18:33.771 --- 10.0.0.1 ping statistics --- 00:18:33.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.771 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=147738 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 147738 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 147738 ']' 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
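The `nvmf_tcp_init` sequence above (`common.sh@242`–`@267`) builds the test topology: the target NIC is moved into a fresh network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened, and a ping sanity check runs in each direction. Condensed into plain commands, this is roughly (a sketch using the interface names from this log; requires root and real NICs):

```shell
# Recreate the nvmf_tcp_init topology from the trace: target NIC in its own
# netns, initiator NIC in the root namespace, NVMe/TCP port 4420 allowed in.
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                 # isolate the target side
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target check
```

The target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is re-prefixed with `NVMF_TARGET_NS_CMD` at `common.sh@270`.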
00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.771 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.708 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.708 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:34.708 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.708 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.708 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.708 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.708 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=147886 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@726 -- # digest=null 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff9e048644131d271c5e86c762cc9af4cea503970d3c66a8 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.voQ 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff9e048644131d271c5e86c762cc9af4cea503970d3c66a8 0 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff9e048644131d271c5e86c762cc9af4cea503970d3c66a8 0 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff9e048644131d271c5e86c762cc9af4cea503970d3c66a8 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.voQ 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.voQ 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.voQ 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=38a61161d50bc0dd7f72ce6028613b24ea714f71dce595a92506015a169be785 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Nzx 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 38a61161d50bc0dd7f72ce6028613b24ea714f71dce595a92506015a169be785 3 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 38a61161d50bc0dd7f72ce6028613b24ea714f71dce595a92506015a169be785 3 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=38a61161d50bc0dd7f72ce6028613b24ea714f71dce595a92506015a169be785 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Nzx 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Nzx 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Nzx 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=458541e6bd71b88b289e213abd74b13c 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.c9G 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 458541e6bd71b88b289e213abd74b13c 1 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
458541e6bd71b88b289e213abd74b13c 1 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=458541e6bd71b88b289e213abd74b13c 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.c9G 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.c9G 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.c9G 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9f9f62b81ae50c331a11d3a00ae2cbeb18d71cc338d9f465 00:18:34.709 06:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Pnu 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9f9f62b81ae50c331a11d3a00ae2cbeb18d71cc338d9f465 2 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9f9f62b81ae50c331a11d3a00ae2cbeb18d71cc338d9f465 2 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9f9f62b81ae50c331a11d3a00ae2cbeb18d71cc338d9f465 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:34.709 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Pnu 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Pnu 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Pnu 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=349e07ac211b0fc7eacdd1b4e9e9a4a2067d541e327dccfa 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ar3 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 349e07ac211b0fc7eacdd1b4e9e9a4a2067d541e327dccfa 2 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 349e07ac211b0fc7eacdd1b4e9e9a4a2067d541e327dccfa 2 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=349e07ac211b0fc7eacdd1b4e9e9a4a2067d541e327dccfa 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:34.709 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ar3 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ar3 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.ar3 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a5728cb45fd5cc361b7329223d6bbb6b 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.n5w 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a5728cb45fd5cc361b7329223d6bbb6b 1 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a5728cb45fd5cc361b7329223d6bbb6b 1 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a5728cb45fd5cc361b7329223d6bbb6b 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.n5w 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.n5w 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.n5w 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b75a300441f780821556dbc8e3f6f96074cfb8ac488787ea068cd70d4337b797 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JPj 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b75a300441f780821556dbc8e3f6f96074cfb8ac488787ea068cd70d4337b797 3 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 b75a300441f780821556dbc8e3f6f96074cfb8ac488787ea068cd70d4337b797 3 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b75a300441f780821556dbc8e3f6f96074cfb8ac488787ea068cd70d4337b797 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JPj 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JPj 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.JPj 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 147738 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 147738 ']' 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
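The `gen_dhchap_key`/`format_key` trace above reads N random bytes with `xxd` from `/dev/urandom` and pipes the resulting hex string through an inline `python -` step. Below is a minimal sketch of what that step appears to do, consistent with the `DHHC-1:..:..:` secrets that show up in the later `nvme connect` commands in this log: the hex string's ASCII bytes are suffixed with their CRC-32 in little-endian byte order, base64-encoded, and framed with the prefix and digest id. The function name is illustrative, not SPDK's actual helper.

```python
import base64
import zlib


def format_dhchap_key(hex_key: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the DHHC-1 secret representation (assumed from this log).

    The ASCII hex string itself (not its decoded bytes) is suffixed with its
    CRC-32, little-endian, then base64-encoded and framed as
    '<prefix>:<digest>:<base64>:'. Digest ids: 0=null, 1=sha256,
    2=sha384, 3=sha512.
    """
    data = hex_key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"{prefix}:{digest_id:02x}:{b64}:"


# Key 0 from the run above (digest 0 = null)
secret = format_dhchap_key(
    "ff9e048644131d271c5e86c762cc9af4cea503970d3c66a8", 0
)
```

The CRC-32 suffix lets the receiving side detect a corrupted or truncated secret before attempting authentication.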
00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.968 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 147886 /var/tmp/host.sock 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 147886 ']' 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:35.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
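The `waitforlisten` calls above block until the target process answers on its UNIX domain RPC socket (`/var/tmp/spdk.sock`, `/var/tmp/host.sock`), retrying up to `max_retries` times. A minimal sketch of that wait-for-listen pattern; `wait_for_unix_listener` is an illustrative name and simplification, not SPDK's actual helper:

```python
import os
import socket
import time


def wait_for_unix_listener(path: str, timeout: float = 10.0,
                           interval: float = 0.1) -> bool:
    """Poll until something is accepting connections on a UNIX domain socket.

    Returns True as soon as a connect() succeeds, False if the deadline
    passes first. A socket file that exists but has no listener yet
    (ConnectionRefusedError) is retried, matching the startup race the
    harness is guarding against.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True
                except (ConnectionRefusedError, FileNotFoundError):
                    pass
        time.sleep(interval)
    return False
```

Polling connect() rather than just checking the path exists matters: the socket file can be created before the server calls listen(), so a bare existence check would race.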
00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.227 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.voQ 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.voQ 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.voQ 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.Nzx ]] 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Nzx 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.164 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Nzx 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Nzx 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c9G 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.c9G 00:18:36.423 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.c9G 00:18:36.683 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.Pnu ]] 00:18:36.683 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pnu 00:18:36.683 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.683 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.683 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.683 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pnu 00:18:36.683 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pnu 00:18:36.941 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:36.941 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ar3 00:18:36.941 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.941 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.941 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.941 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ar3 00:18:36.941 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ar3 00:18:37.199 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.n5w ]] 00:18:37.200 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n5w 00:18:37.200 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.200 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.200 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.200 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n5w 00:18:37.200 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n5w 00:18:37.458 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:37.458 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JPj 00:18:37.458 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.458 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.458 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.458 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JPj 00:18:37.458 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JPj 00:18:37.715 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:18:37.715 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:37.715 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.715 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.716 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:37.716 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:37.973 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.974 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.974 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.539 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:38.539 { 00:18:38.539 "cntlid": 1, 00:18:38.539 "qid": 0, 00:18:38.539 "state": "enabled", 00:18:38.539 "thread": "nvmf_tgt_poll_group_000", 00:18:38.539 "listen_address": { 00:18:38.539 "trtype": "TCP", 00:18:38.539 "adrfam": "IPv4", 00:18:38.539 "traddr": "10.0.0.2", 00:18:38.539 "trsvcid": "4420" 00:18:38.539 }, 00:18:38.539 "peer_address": { 00:18:38.539 "trtype": "TCP", 00:18:38.539 "adrfam": "IPv4", 00:18:38.539 "traddr": "10.0.0.1", 00:18:38.539 "trsvcid": "44312" 00:18:38.539 }, 00:18:38.539 "auth": { 00:18:38.539 "state": "completed", 00:18:38.539 "digest": "sha256", 00:18:38.539 "dhgroup": "null" 00:18:38.539 } 00:18:38.539 } 00:18:38.539 ]' 00:18:38.539 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.798 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.798 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.798 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:38.798 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.798 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.798 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.798 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.056 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:39.991 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.248 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:40.248 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.248 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.248 06:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.248 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.248 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.248 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.248 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.249 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.249 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.249 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.249 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.505 00:18:40.505 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.505 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.505 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.763 { 00:18:40.763 "cntlid": 3, 00:18:40.763 "qid": 0, 00:18:40.763 "state": "enabled", 00:18:40.763 "thread": "nvmf_tgt_poll_group_000", 00:18:40.763 "listen_address": { 00:18:40.763 "trtype": "TCP", 00:18:40.763 "adrfam": "IPv4", 00:18:40.763 "traddr": "10.0.0.2", 00:18:40.763 "trsvcid": "4420" 00:18:40.763 }, 00:18:40.763 "peer_address": { 00:18:40.763 "trtype": "TCP", 00:18:40.763 "adrfam": "IPv4", 00:18:40.763 "traddr": "10.0.0.1", 00:18:40.763 "trsvcid": "44342" 00:18:40.763 }, 00:18:40.763 "auth": { 00:18:40.763 "state": "completed", 00:18:40.763 "digest": "sha256", 00:18:40.763 "dhgroup": "null" 00:18:40.763 } 00:18:40.763 } 00:18:40.763 ]' 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.763 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.022 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.022 06:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.022 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.022 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.022 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.279 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.215 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.472 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:42.472 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.472 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.472 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.473 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.473 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.473 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.473 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.473 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.473 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.473 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.473 
06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.730 00:18:42.730 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.730 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.730 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.987 { 00:18:42.987 "cntlid": 5, 00:18:42.987 "qid": 0, 00:18:42.987 "state": "enabled", 00:18:42.987 "thread": "nvmf_tgt_poll_group_000", 00:18:42.987 "listen_address": { 00:18:42.987 "trtype": "TCP", 00:18:42.987 "adrfam": "IPv4", 00:18:42.987 "traddr": "10.0.0.2", 00:18:42.987 "trsvcid": "4420" 00:18:42.987 }, 00:18:42.987 "peer_address": { 00:18:42.987 "trtype": "TCP", 00:18:42.987 "adrfam": "IPv4", 00:18:42.987 "traddr": 
"10.0.0.1", 00:18:42.987 "trsvcid": "44376" 00:18:42.987 }, 00:18:42.987 "auth": { 00:18:42.987 "state": "completed", 00:18:42.987 "digest": "sha256", 00:18:42.987 "dhgroup": "null" 00:18:42.987 } 00:18:42.987 } 00:18:42.987 ]' 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.987 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.245 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:18:44.199 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.199 06:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.199 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.199 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.471 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.036 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.036 06:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.036 { 00:18:45.036 "cntlid": 7, 00:18:45.036 "qid": 0, 00:18:45.036 "state": "enabled", 00:18:45.036 "thread": "nvmf_tgt_poll_group_000", 00:18:45.036 "listen_address": { 00:18:45.036 "trtype": "TCP", 00:18:45.036 "adrfam": "IPv4", 00:18:45.036 "traddr": "10.0.0.2", 00:18:45.036 "trsvcid": "4420" 00:18:45.036 }, 00:18:45.036 "peer_address": { 00:18:45.036 "trtype": "TCP", 00:18:45.036 "adrfam": "IPv4", 00:18:45.036 "traddr": "10.0.0.1", 00:18:45.036 "trsvcid": "44398" 00:18:45.036 }, 00:18:45.036 "auth": { 00:18:45.036 "state": "completed", 00:18:45.036 "digest": "sha256", 00:18:45.036 "dhgroup": "null" 00:18:45.036 } 00:18:45.036 } 00:18:45.036 ]' 00:18:45.036 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.294 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.294 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.294 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.294 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.294 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.294 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.294 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.552 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:18:46.487 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.488 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.746 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.004 00:18:47.004 06:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.004 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.004 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.261 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.261 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.261 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.261 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.261 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.261 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.261 { 00:18:47.261 "cntlid": 9, 00:18:47.261 "qid": 0, 00:18:47.261 "state": "enabled", 00:18:47.261 "thread": "nvmf_tgt_poll_group_000", 00:18:47.261 "listen_address": { 00:18:47.261 "trtype": "TCP", 00:18:47.261 "adrfam": "IPv4", 00:18:47.261 "traddr": "10.0.0.2", 00:18:47.262 "trsvcid": "4420" 00:18:47.262 }, 00:18:47.262 "peer_address": { 00:18:47.262 "trtype": "TCP", 00:18:47.262 "adrfam": "IPv4", 00:18:47.262 "traddr": "10.0.0.1", 00:18:47.262 "trsvcid": "44420" 00:18:47.262 }, 00:18:47.262 "auth": { 00:18:47.262 "state": "completed", 00:18:47.262 "digest": "sha256", 00:18:47.262 "dhgroup": "ffdhe2048" 00:18:47.262 } 00:18:47.262 } 00:18:47.262 ]' 00:18:47.262 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.262 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.262 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.520 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.520 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.520 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.520 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.520 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.778 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:18:48.715 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.715 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.715 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.715 06:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.715 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.715 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.715 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.715 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.973 06:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.973 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.231 00:18:49.231 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.231 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.231 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.489 { 
00:18:49.489 "cntlid": 11, 00:18:49.489 "qid": 0, 00:18:49.489 "state": "enabled", 00:18:49.489 "thread": "nvmf_tgt_poll_group_000", 00:18:49.489 "listen_address": { 00:18:49.489 "trtype": "TCP", 00:18:49.489 "adrfam": "IPv4", 00:18:49.489 "traddr": "10.0.0.2", 00:18:49.489 "trsvcid": "4420" 00:18:49.489 }, 00:18:49.489 "peer_address": { 00:18:49.489 "trtype": "TCP", 00:18:49.489 "adrfam": "IPv4", 00:18:49.489 "traddr": "10.0.0.1", 00:18:49.489 "trsvcid": "54570" 00:18:49.489 }, 00:18:49.489 "auth": { 00:18:49.489 "state": "completed", 00:18:49.489 "digest": "sha256", 00:18:49.489 "dhgroup": "ffdhe2048" 00:18:49.489 } 00:18:49.489 } 00:18:49.489 ]' 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.489 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.747 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.747 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.747 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.747 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:18:50.682 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.682 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.682 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.682 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.682 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.682 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.682 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.682 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.940 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.507 00:18:51.507 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.507 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.507 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.507 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.507 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.507 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.507 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.765 { 00:18:51.765 "cntlid": 13, 00:18:51.765 "qid": 0, 00:18:51.765 "state": "enabled", 00:18:51.765 "thread": "nvmf_tgt_poll_group_000", 00:18:51.765 "listen_address": { 00:18:51.765 "trtype": "TCP", 00:18:51.765 "adrfam": "IPv4", 00:18:51.765 "traddr": "10.0.0.2", 00:18:51.765 "trsvcid": "4420" 00:18:51.765 }, 00:18:51.765 "peer_address": { 00:18:51.765 "trtype": "TCP", 00:18:51.765 "adrfam": "IPv4", 00:18:51.765 "traddr": "10.0.0.1", 00:18:51.765 "trsvcid": "54592" 00:18:51.765 }, 00:18:51.765 "auth": { 00:18:51.765 "state": "completed", 00:18:51.765 "digest": "sha256", 00:18:51.765 "dhgroup": "ffdhe2048" 00:18:51.765 } 00:18:51.765 } 00:18:51.765 ]' 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.765 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.023 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:18:52.956 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.956 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.956 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.956 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.214 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.780 00:18:53.780 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.780 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.780 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.038 { 00:18:54.038 "cntlid": 15, 00:18:54.038 "qid": 0, 00:18:54.038 "state": "enabled", 00:18:54.038 "thread": "nvmf_tgt_poll_group_000", 00:18:54.038 "listen_address": { 00:18:54.038 "trtype": "TCP", 00:18:54.038 "adrfam": "IPv4", 00:18:54.038 "traddr": "10.0.0.2", 00:18:54.038 "trsvcid": "4420" 00:18:54.038 }, 00:18:54.038 "peer_address": { 00:18:54.038 "trtype": "TCP", 00:18:54.038 "adrfam": "IPv4", 00:18:54.038 "traddr": "10.0.0.1", 00:18:54.038 "trsvcid": "54612" 00:18:54.038 }, 00:18:54.038 "auth": { 
00:18:54.038 "state": "completed", 00:18:54.038 "digest": "sha256", 00:18:54.038 "dhgroup": "ffdhe2048" 00:18:54.038 } 00:18:54.038 } 00:18:54.038 ]' 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.038 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.296 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.229 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.487 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.746 00:18:55.746 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.746 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.746 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.003 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.003 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.003 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:56.003 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.003 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.003 { 00:18:56.003 "cntlid": 17, 00:18:56.003 "qid": 0, 00:18:56.003 "state": "enabled", 00:18:56.003 "thread": "nvmf_tgt_poll_group_000", 00:18:56.004 "listen_address": { 00:18:56.004 "trtype": "TCP", 00:18:56.004 "adrfam": "IPv4", 00:18:56.004 "traddr": "10.0.0.2", 00:18:56.004 "trsvcid": "4420" 00:18:56.004 }, 00:18:56.004 "peer_address": { 00:18:56.004 "trtype": "TCP", 00:18:56.004 "adrfam": "IPv4", 00:18:56.004 "traddr": "10.0.0.1", 00:18:56.004 "trsvcid": "54640" 00:18:56.004 }, 00:18:56.004 "auth": { 00:18:56.004 "state": "completed", 00:18:56.004 "digest": "sha256", 00:18:56.004 "dhgroup": "ffdhe3072" 00:18:56.004 } 00:18:56.004 } 00:18:56.004 ]' 00:18:56.004 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.261 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.261 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.261 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.261 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.261 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.261 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.261 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.519 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.453 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.711 06:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.711 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:57.969 00:18:57.969 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.969 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.969 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.227 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.228 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.228 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.228 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.228 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.228 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.228 { 00:18:58.228 "cntlid": 19, 00:18:58.228 "qid": 0, 00:18:58.228 "state": "enabled", 00:18:58.228 "thread": "nvmf_tgt_poll_group_000", 00:18:58.228 "listen_address": { 00:18:58.228 "trtype": "TCP", 00:18:58.228 "adrfam": "IPv4", 00:18:58.228 "traddr": "10.0.0.2", 00:18:58.228 "trsvcid": "4420" 00:18:58.228 }, 00:18:58.228 "peer_address": { 00:18:58.228 "trtype": "TCP", 00:18:58.228 "adrfam": "IPv4", 00:18:58.228 "traddr": "10.0.0.1", 00:18:58.228 "trsvcid": "54674" 00:18:58.228 }, 00:18:58.228 "auth": { 00:18:58.228 "state": "completed", 00:18:58.228 "digest": "sha256", 00:18:58.228 "dhgroup": "ffdhe3072" 00:18:58.228 } 00:18:58.228 } 00:18:58.228 ]' 00:18:58.228 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.228 
06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.228 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.486 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.486 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.486 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.486 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.486 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.744 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:18:59.717 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.717 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.717 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.717 06:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.717 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.717 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.717 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.717 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.975 06:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.975 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.233 00:19:00.233 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.233 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.233 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.491 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.491 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.491 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.491 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.491 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.491 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.492 { 
00:19:00.492 "cntlid": 21, 00:19:00.492 "qid": 0, 00:19:00.492 "state": "enabled", 00:19:00.492 "thread": "nvmf_tgt_poll_group_000", 00:19:00.492 "listen_address": { 00:19:00.492 "trtype": "TCP", 00:19:00.492 "adrfam": "IPv4", 00:19:00.492 "traddr": "10.0.0.2", 00:19:00.492 "trsvcid": "4420" 00:19:00.492 }, 00:19:00.492 "peer_address": { 00:19:00.492 "trtype": "TCP", 00:19:00.492 "adrfam": "IPv4", 00:19:00.492 "traddr": "10.0.0.1", 00:19:00.492 "trsvcid": "60480" 00:19:00.492 }, 00:19:00.492 "auth": { 00:19:00.492 "state": "completed", 00:19:00.492 "digest": "sha256", 00:19:00.492 "dhgroup": "ffdhe3072" 00:19:00.492 } 00:19:00.492 } 00:19:00.492 ]' 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.492 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.750 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:19:01.683 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.941 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.941 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.941 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.941 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.941 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.941 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.941 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.199 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.457 00:19:02.457 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.457 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.457 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.714 06:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.714 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.714 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.715 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.715 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.715 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.715 { 00:19:02.715 "cntlid": 23, 00:19:02.715 "qid": 0, 00:19:02.715 "state": "enabled", 00:19:02.715 "thread": "nvmf_tgt_poll_group_000", 00:19:02.715 "listen_address": { 00:19:02.715 "trtype": "TCP", 00:19:02.715 "adrfam": "IPv4", 00:19:02.715 "traddr": "10.0.0.2", 00:19:02.715 "trsvcid": "4420" 00:19:02.715 }, 00:19:02.715 "peer_address": { 00:19:02.715 "trtype": "TCP", 00:19:02.715 "adrfam": "IPv4", 00:19:02.715 "traddr": "10.0.0.1", 00:19:02.715 "trsvcid": "60502" 00:19:02.715 }, 00:19:02.715 "auth": { 00:19:02.715 "state": "completed", 00:19:02.715 "digest": "sha256", 00:19:02.715 "dhgroup": "ffdhe3072" 00:19:02.715 } 00:19:02.715 } 00:19:02.715 ]' 00:19:02.715 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.715 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.715 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.974 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.974 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.974 06:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.974 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.974 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.232 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:19:04.165 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.165 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.165 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.165 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.165 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.165 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.165 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.166 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.166 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.423 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.423 06:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.680 00:19:04.938 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.938 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.938 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.196 { 00:19:05.196 "cntlid": 25, 00:19:05.196 "qid": 0, 00:19:05.196 "state": "enabled", 00:19:05.196 "thread": "nvmf_tgt_poll_group_000", 00:19:05.196 "listen_address": { 00:19:05.196 "trtype": "TCP", 00:19:05.196 "adrfam": "IPv4", 00:19:05.196 "traddr": "10.0.0.2", 00:19:05.196 "trsvcid": "4420" 00:19:05.196 }, 00:19:05.196 "peer_address": { 00:19:05.196 "trtype": "TCP", 00:19:05.196 "adrfam": "IPv4", 00:19:05.196 "traddr": "10.0.0.1", 
00:19:05.196 "trsvcid": "60524" 00:19:05.196 }, 00:19:05.196 "auth": { 00:19:05.196 "state": "completed", 00:19:05.196 "digest": "sha256", 00:19:05.196 "dhgroup": "ffdhe4096" 00:19:05.196 } 00:19:05.196 } 00:19:05.196 ]' 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.196 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.454 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.387 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.645 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.212 00:19:07.212 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.212 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.212 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.470 { 00:19:07.470 "cntlid": 27, 00:19:07.470 "qid": 0, 00:19:07.470 "state": "enabled", 00:19:07.470 "thread": "nvmf_tgt_poll_group_000", 00:19:07.470 "listen_address": { 00:19:07.470 "trtype": "TCP", 00:19:07.470 "adrfam": "IPv4", 00:19:07.470 "traddr": "10.0.0.2", 00:19:07.470 "trsvcid": "4420" 00:19:07.470 }, 00:19:07.470 "peer_address": { 00:19:07.470 "trtype": "TCP", 00:19:07.470 "adrfam": "IPv4", 00:19:07.470 "traddr": "10.0.0.1", 00:19:07.470 "trsvcid": "60548" 00:19:07.470 }, 00:19:07.470 "auth": { 00:19:07.470 "state": "completed", 00:19:07.470 "digest": "sha256", 00:19:07.470 "dhgroup": "ffdhe4096" 00:19:07.470 } 00:19:07.470 } 00:19:07.470 ]' 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.470 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.728 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:19:08.662 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.921 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.921 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.921 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.921 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.921 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.921 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.921 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.179 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.437 00:19:09.437 06:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.437 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.437 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.695 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.695 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.695 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.695 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.695 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.695 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.695 { 00:19:09.695 "cntlid": 29, 00:19:09.695 "qid": 0, 00:19:09.695 "state": "enabled", 00:19:09.696 "thread": "nvmf_tgt_poll_group_000", 00:19:09.696 "listen_address": { 00:19:09.696 "trtype": "TCP", 00:19:09.696 "adrfam": "IPv4", 00:19:09.696 "traddr": "10.0.0.2", 00:19:09.696 "trsvcid": "4420" 00:19:09.696 }, 00:19:09.696 "peer_address": { 00:19:09.696 "trtype": "TCP", 00:19:09.696 "adrfam": "IPv4", 00:19:09.696 "traddr": "10.0.0.1", 00:19:09.696 "trsvcid": "59388" 00:19:09.696 }, 00:19:09.696 "auth": { 00:19:09.696 "state": "completed", 00:19:09.696 "digest": "sha256", 00:19:09.696 "dhgroup": "ffdhe4096" 00:19:09.696 } 00:19:09.696 } 00:19:09.696 ]' 00:19:09.696 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.696 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.696 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.953 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.953 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.953 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.953 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.953 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.211 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.143 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.400 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:11.400 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.400 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.400 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.401 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.965 00:19:11.965 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.965 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.965 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.222 { 00:19:12.222 "cntlid": 31, 00:19:12.222 "qid": 0, 00:19:12.222 "state": "enabled", 00:19:12.222 "thread": "nvmf_tgt_poll_group_000", 
00:19:12.222 "listen_address": { 00:19:12.222 "trtype": "TCP", 00:19:12.222 "adrfam": "IPv4", 00:19:12.222 "traddr": "10.0.0.2", 00:19:12.222 "trsvcid": "4420" 00:19:12.222 }, 00:19:12.222 "peer_address": { 00:19:12.222 "trtype": "TCP", 00:19:12.222 "adrfam": "IPv4", 00:19:12.222 "traddr": "10.0.0.1", 00:19:12.222 "trsvcid": "59408" 00:19:12.222 }, 00:19:12.222 "auth": { 00:19:12.222 "state": "completed", 00:19:12.222 "digest": "sha256", 00:19:12.222 "dhgroup": "ffdhe4096" 00:19:12.222 } 00:19:12.222 } 00:19:12.222 ]' 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.222 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.478 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 
00:19:13.409 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.410 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.667 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.668 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.668 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.305 00:19:14.305 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.305 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.305 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.611 06:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.611 { 00:19:14.611 "cntlid": 33, 00:19:14.611 "qid": 0, 00:19:14.611 "state": "enabled", 00:19:14.611 "thread": "nvmf_tgt_poll_group_000", 00:19:14.611 "listen_address": { 00:19:14.611 "trtype": "TCP", 00:19:14.611 "adrfam": "IPv4", 00:19:14.611 "traddr": "10.0.0.2", 00:19:14.611 "trsvcid": "4420" 00:19:14.611 }, 00:19:14.611 "peer_address": { 00:19:14.611 "trtype": "TCP", 00:19:14.611 "adrfam": "IPv4", 00:19:14.611 "traddr": "10.0.0.1", 00:19:14.611 "trsvcid": "59444" 00:19:14.611 }, 00:19:14.611 "auth": { 00:19:14.611 "state": "completed", 00:19:14.611 "digest": "sha256", 00:19:14.611 "dhgroup": "ffdhe6144" 00:19:14.611 } 00:19:14.611 } 00:19:14.611 ]' 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.611 06:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.611 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.869 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:19:15.801 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.368 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.934 00:19:16.934 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.934 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.934 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.934 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.934 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.934 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.934 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.934 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.934 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.934 { 00:19:16.934 "cntlid": 35, 00:19:16.934 "qid": 0, 00:19:16.934 "state": "enabled", 00:19:16.934 "thread": "nvmf_tgt_poll_group_000", 00:19:16.934 "listen_address": { 00:19:16.934 "trtype": "TCP", 00:19:16.934 "adrfam": "IPv4", 00:19:16.934 "traddr": "10.0.0.2", 00:19:16.934 "trsvcid": "4420" 00:19:16.934 }, 00:19:16.934 "peer_address": { 00:19:16.934 "trtype": "TCP", 00:19:16.934 "adrfam": "IPv4", 00:19:16.934 "traddr": "10.0.0.1", 00:19:16.934 "trsvcid": "59468" 00:19:16.934 
}, 00:19:16.934 "auth": { 00:19:16.934 "state": "completed", 00:19:16.934 "digest": "sha256", 00:19:16.934 "dhgroup": "ffdhe6144" 00:19:16.934 } 00:19:16.934 } 00:19:16.934 ]' 00:19:16.934 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.191 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.191 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.191 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.191 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.191 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.191 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.191 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.449 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.379 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.636 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.637 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.201 00:19:19.201 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.202 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.202 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.459 { 00:19:19.459 "cntlid": 37, 00:19:19.459 "qid": 0, 00:19:19.459 "state": "enabled", 00:19:19.459 "thread": "nvmf_tgt_poll_group_000", 00:19:19.459 "listen_address": { 00:19:19.459 "trtype": "TCP", 00:19:19.459 "adrfam": "IPv4", 00:19:19.459 "traddr": "10.0.0.2", 00:19:19.459 "trsvcid": "4420" 00:19:19.459 }, 00:19:19.459 "peer_address": { 00:19:19.459 "trtype": "TCP", 00:19:19.459 "adrfam": "IPv4", 00:19:19.459 "traddr": "10.0.0.1", 00:19:19.459 "trsvcid": "58100" 00:19:19.459 }, 00:19:19.459 "auth": { 00:19:19.459 "state": "completed", 00:19:19.459 "digest": "sha256", 00:19:19.459 "dhgroup": "ffdhe6144" 00:19:19.459 } 00:19:19.459 } 00:19:19.459 ]' 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.459 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.717 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.717 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.717 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:19.974 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.907 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:21.164 06:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.164 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.729 00:19:21.729 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.729 06:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.729 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.987 { 00:19:21.987 "cntlid": 39, 00:19:21.987 "qid": 0, 00:19:21.987 "state": "enabled", 00:19:21.987 "thread": "nvmf_tgt_poll_group_000", 00:19:21.987 "listen_address": { 00:19:21.987 "trtype": "TCP", 00:19:21.987 "adrfam": "IPv4", 00:19:21.987 "traddr": "10.0.0.2", 00:19:21.987 "trsvcid": "4420" 00:19:21.987 }, 00:19:21.987 "peer_address": { 00:19:21.987 "trtype": "TCP", 00:19:21.987 "adrfam": "IPv4", 00:19:21.987 "traddr": "10.0.0.1", 00:19:21.987 "trsvcid": "58122" 00:19:21.987 }, 00:19:21.987 "auth": { 00:19:21.987 "state": "completed", 00:19:21.987 "digest": "sha256", 00:19:21.987 "dhgroup": "ffdhe6144" 00:19:21.987 } 00:19:21.987 } 00:19:21.987 ]' 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.987 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.244 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.616 06:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.616 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.616 06:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.617 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.550 00:19:24.550 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.550 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.550 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.808 { 00:19:24.808 "cntlid": 41, 00:19:24.808 "qid": 0, 00:19:24.808 "state": "enabled", 00:19:24.808 "thread": 
"nvmf_tgt_poll_group_000", 00:19:24.808 "listen_address": { 00:19:24.808 "trtype": "TCP", 00:19:24.808 "adrfam": "IPv4", 00:19:24.808 "traddr": "10.0.0.2", 00:19:24.808 "trsvcid": "4420" 00:19:24.808 }, 00:19:24.808 "peer_address": { 00:19:24.808 "trtype": "TCP", 00:19:24.808 "adrfam": "IPv4", 00:19:24.808 "traddr": "10.0.0.1", 00:19:24.808 "trsvcid": "58158" 00:19:24.808 }, 00:19:24.808 "auth": { 00:19:24.808 "state": "completed", 00:19:24.808 "digest": "sha256", 00:19:24.808 "dhgroup": "ffdhe8192" 00:19:24.808 } 00:19:24.808 } 00:19:24.808 ]' 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.808 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.808 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.808 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.808 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.808 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.808 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.066 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:19:26.000 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.000 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.000 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.000 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.000 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.000 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.001 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.001 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.567 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.501 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.501 { 00:19:27.501 "cntlid": 43, 00:19:27.501 "qid": 0, 00:19:27.501 "state": "enabled", 00:19:27.501 "thread": "nvmf_tgt_poll_group_000", 00:19:27.501 "listen_address": { 00:19:27.501 "trtype": "TCP", 00:19:27.501 "adrfam": "IPv4", 00:19:27.501 "traddr": "10.0.0.2", 00:19:27.501 "trsvcid": "4420" 00:19:27.501 }, 00:19:27.501 "peer_address": { 00:19:27.501 "trtype": "TCP", 00:19:27.501 "adrfam": "IPv4", 00:19:27.501 "traddr": "10.0.0.1", 00:19:27.501 "trsvcid": "58188" 00:19:27.501 }, 00:19:27.501 "auth": { 00:19:27.501 "state": "completed", 00:19:27.501 "digest": "sha256", 00:19:27.501 "dhgroup": "ffdhe8192" 00:19:27.501 } 00:19:27.501 } 00:19:27.501 ]' 00:19:27.501 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.759 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.759 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.759 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.759 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.759 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.759 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.759 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.018 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.951 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.209 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.209 06:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.165 00:19:30.165 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.165 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.165 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.438 { 00:19:30.438 "cntlid": 45, 00:19:30.438 "qid": 0, 00:19:30.438 "state": "enabled", 00:19:30.438 "thread": "nvmf_tgt_poll_group_000", 00:19:30.438 "listen_address": { 00:19:30.438 "trtype": "TCP", 00:19:30.438 "adrfam": "IPv4", 00:19:30.438 "traddr": "10.0.0.2", 00:19:30.438 "trsvcid": "4420" 00:19:30.438 }, 00:19:30.438 "peer_address": { 00:19:30.438 "trtype": "TCP", 00:19:30.438 "adrfam": "IPv4", 00:19:30.438 "traddr": "10.0.0.1", 
00:19:30.438 "trsvcid": "44204" 00:19:30.438 }, 00:19:30.438 "auth": { 00:19:30.438 "state": "completed", 00:19:30.438 "digest": "sha256", 00:19:30.438 "dhgroup": "ffdhe8192" 00:19:30.438 } 00:19:30.438 } 00:19:30.438 ]' 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.438 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.696 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.696 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.696 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.954 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:19:31.888 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.888 06:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.888 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.888 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.888 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.888 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.888 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.888 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.146 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.079 00:19:33.079 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.079 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.079 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.338 06:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.338 { 00:19:33.338 "cntlid": 47, 00:19:33.338 "qid": 0, 00:19:33.338 "state": "enabled", 00:19:33.338 "thread": "nvmf_tgt_poll_group_000", 00:19:33.338 "listen_address": { 00:19:33.338 "trtype": "TCP", 00:19:33.338 "adrfam": "IPv4", 00:19:33.338 "traddr": "10.0.0.2", 00:19:33.338 "trsvcid": "4420" 00:19:33.338 }, 00:19:33.338 "peer_address": { 00:19:33.338 "trtype": "TCP", 00:19:33.338 "adrfam": "IPv4", 00:19:33.338 "traddr": "10.0.0.1", 00:19:33.338 "trsvcid": "44236" 00:19:33.338 }, 00:19:33.338 "auth": { 00:19:33.338 "state": "completed", 00:19:33.338 "digest": "sha256", 00:19:33.338 "dhgroup": "ffdhe8192" 00:19:33.338 } 00:19:33.338 } 00:19:33.338 ]' 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.338 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.596 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.530 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.788 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.353 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.353 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.353 { 00:19:35.353 "cntlid": 49, 00:19:35.353 "qid": 0, 00:19:35.353 "state": "enabled", 00:19:35.353 "thread": "nvmf_tgt_poll_group_000", 00:19:35.353 "listen_address": { 00:19:35.353 "trtype": "TCP", 00:19:35.353 "adrfam": "IPv4", 00:19:35.353 "traddr": "10.0.0.2", 00:19:35.353 "trsvcid": "4420" 00:19:35.353 }, 00:19:35.353 "peer_address": { 00:19:35.353 "trtype": "TCP", 00:19:35.353 "adrfam": "IPv4", 00:19:35.353 "traddr": "10.0.0.1", 00:19:35.353 "trsvcid": "44252" 00:19:35.353 }, 00:19:35.353 "auth": { 00:19:35.353 "state": "completed", 00:19:35.353 "digest": "sha384", 00:19:35.353 "dhgroup": "null" 00:19:35.353 } 00:19:35.353 } 00:19:35.353 ]' 00:19:35.353 
06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.611 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.611 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.611 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:35.611 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.611 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.611 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.611 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.869 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:19:36.803 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.803 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.803 
06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.803 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.803 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.803 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.803 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.803 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.061 06:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.061 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.319 00:19:37.319 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.319 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.319 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.577 { 00:19:37.577 "cntlid": 51, 00:19:37.577 "qid": 0, 00:19:37.577 "state": "enabled", 00:19:37.577 "thread": "nvmf_tgt_poll_group_000", 00:19:37.577 "listen_address": { 00:19:37.577 "trtype": "TCP", 00:19:37.577 "adrfam": "IPv4", 00:19:37.577 "traddr": "10.0.0.2", 00:19:37.577 "trsvcid": "4420" 00:19:37.577 }, 00:19:37.577 "peer_address": { 00:19:37.577 "trtype": "TCP", 00:19:37.577 "adrfam": "IPv4", 00:19:37.577 "traddr": "10.0.0.1", 00:19:37.577 "trsvcid": "44282" 00:19:37.577 }, 00:19:37.577 "auth": { 00:19:37.577 "state": "completed", 00:19:37.577 "digest": "sha384", 00:19:37.577 "dhgroup": "null" 00:19:37.577 } 00:19:37.577 } 00:19:37.577 ]' 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:37.577 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.834 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.834 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.834 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.092 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.027 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.285 06:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.285 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.543 00:19:39.543 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.543 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.543 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.801 { 00:19:39.801 "cntlid": 53, 00:19:39.801 "qid": 0, 00:19:39.801 "state": "enabled", 00:19:39.801 "thread": "nvmf_tgt_poll_group_000", 00:19:39.801 "listen_address": { 00:19:39.801 "trtype": "TCP", 00:19:39.801 "adrfam": "IPv4", 00:19:39.801 "traddr": "10.0.0.2", 00:19:39.801 "trsvcid": "4420" 00:19:39.801 }, 00:19:39.801 "peer_address": { 00:19:39.801 "trtype": "TCP", 00:19:39.801 "adrfam": "IPv4", 00:19:39.801 "traddr": "10.0.0.1", 00:19:39.801 "trsvcid": "51936" 00:19:39.801 }, 00:19:39.801 "auth": { 00:19:39.801 "state": "completed", 00:19:39.801 "digest": "sha384", 00:19:39.801 "dhgroup": "null" 00:19:39.801 } 00:19:39.801 } 00:19:39.801 ]' 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.801 06:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.801 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.059 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.434 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.692 00:19:41.692 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.692 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.692 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.950 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.950 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.950 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.950 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.950 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.950 { 00:19:41.950 "cntlid": 55, 00:19:41.950 "qid": 0, 00:19:41.950 "state": "enabled", 00:19:41.950 "thread": "nvmf_tgt_poll_group_000", 00:19:41.950 "listen_address": { 00:19:41.950 "trtype": "TCP", 00:19:41.950 "adrfam": "IPv4", 00:19:41.950 "traddr": "10.0.0.2", 00:19:41.950 "trsvcid": "4420" 00:19:41.950 }, 00:19:41.950 "peer_address": { 00:19:41.950 "trtype": "TCP", 00:19:41.950 "adrfam": "IPv4", 00:19:41.950 "traddr": "10.0.0.1", 00:19:41.950 "trsvcid": "51968" 00:19:41.950 }, 00:19:41.950 "auth": { 
00:19:41.950 "state": "completed", 00:19:41.950 "digest": "sha384", 00:19:41.950 "dhgroup": "null" 00:19:41.950 } 00:19:41.950 } 00:19:41.950 ]' 00:19:41.950 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.207 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.207 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.207 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:42.207 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.207 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.207 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.207 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.464 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:19:43.394 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.395 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.652 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.909 00:19:43.909 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.909 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.909 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.166 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.166 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.166 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:44.166 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.166 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.166 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.167 { 00:19:44.167 "cntlid": 57, 00:19:44.167 "qid": 0, 00:19:44.167 "state": "enabled", 00:19:44.167 "thread": "nvmf_tgt_poll_group_000", 00:19:44.167 "listen_address": { 00:19:44.167 "trtype": "TCP", 00:19:44.167 "adrfam": "IPv4", 00:19:44.167 "traddr": "10.0.0.2", 00:19:44.167 "trsvcid": "4420" 00:19:44.167 }, 00:19:44.167 "peer_address": { 00:19:44.167 "trtype": "TCP", 00:19:44.167 "adrfam": "IPv4", 00:19:44.167 "traddr": "10.0.0.1", 00:19:44.167 "trsvcid": "51996" 00:19:44.167 }, 00:19:44.167 "auth": { 00:19:44.167 "state": "completed", 00:19:44.167 "digest": "sha384", 00:19:44.167 "dhgroup": "ffdhe2048" 00:19:44.167 } 00:19:44.167 } 00:19:44.167 ]' 00:19:44.167 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.424 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.424 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.424 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.424 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.424 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.424 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.424 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.682 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:45.644 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:45.902 06:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.902 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:46.160 00:19:46.160 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.160 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.160 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.434 { 00:19:46.434 "cntlid": 59, 00:19:46.434 "qid": 0, 00:19:46.434 "state": "enabled", 00:19:46.434 "thread": "nvmf_tgt_poll_group_000", 00:19:46.434 "listen_address": { 00:19:46.434 "trtype": "TCP", 00:19:46.434 "adrfam": "IPv4", 00:19:46.434 "traddr": "10.0.0.2", 00:19:46.434 "trsvcid": "4420" 00:19:46.434 }, 00:19:46.434 "peer_address": { 00:19:46.434 "trtype": "TCP", 00:19:46.434 "adrfam": "IPv4", 00:19:46.434 "traddr": "10.0.0.1", 00:19:46.434 "trsvcid": "52016" 00:19:46.434 }, 00:19:46.434 "auth": { 00:19:46.434 "state": "completed", 00:19:46.434 "digest": "sha384", 00:19:46.434 "dhgroup": "ffdhe2048" 00:19:46.434 } 00:19:46.434 } 00:19:46.434 ]' 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.434 
06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.434 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.691 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.691 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.691 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.691 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.691 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.949 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:19:47.881 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.881 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.881 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 06:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.881 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.881 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.881 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.881 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.138 06:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.138 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.397 00:19:48.397 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.397 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.397 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.655 { 
00:19:48.655 "cntlid": 61, 00:19:48.655 "qid": 0, 00:19:48.655 "state": "enabled", 00:19:48.655 "thread": "nvmf_tgt_poll_group_000", 00:19:48.655 "listen_address": { 00:19:48.655 "trtype": "TCP", 00:19:48.655 "adrfam": "IPv4", 00:19:48.655 "traddr": "10.0.0.2", 00:19:48.655 "trsvcid": "4420" 00:19:48.655 }, 00:19:48.655 "peer_address": { 00:19:48.655 "trtype": "TCP", 00:19:48.655 "adrfam": "IPv4", 00:19:48.655 "traddr": "10.0.0.1", 00:19:48.655 "trsvcid": "55556" 00:19:48.655 }, 00:19:48.655 "auth": { 00:19:48.655 "state": "completed", 00:19:48.655 "digest": "sha384", 00:19:48.655 "dhgroup": "ffdhe2048" 00:19:48.655 } 00:19:48.655 } 00:19:48.655 ]' 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.655 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.913 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.913 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.913 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.913 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.913 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.172 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.106 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.364 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.621 00:19:50.621 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.621 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.621 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.879 06:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.879 { 00:19:50.879 "cntlid": 63, 00:19:50.879 "qid": 0, 00:19:50.879 "state": "enabled", 00:19:50.879 "thread": "nvmf_tgt_poll_group_000", 00:19:50.879 "listen_address": { 00:19:50.879 "trtype": "TCP", 00:19:50.879 "adrfam": "IPv4", 00:19:50.879 "traddr": "10.0.0.2", 00:19:50.879 "trsvcid": "4420" 00:19:50.879 }, 00:19:50.879 "peer_address": { 00:19:50.879 "trtype": "TCP", 00:19:50.879 "adrfam": "IPv4", 00:19:50.879 "traddr": "10.0.0.1", 00:19:50.879 "trsvcid": "55586" 00:19:50.879 }, 00:19:50.879 "auth": { 00:19:50.879 "state": "completed", 00:19:50.879 "digest": "sha384", 00:19:50.879 "dhgroup": "ffdhe2048" 00:19:50.879 } 00:19:50.879 } 00:19:50.879 ]' 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.879 06:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.879 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.136 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.506 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.506 06:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.764 00:19:52.764 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.764 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.764 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.021 { 00:19:53.021 "cntlid": 65, 00:19:53.021 "qid": 0, 00:19:53.021 "state": "enabled", 00:19:53.021 "thread": "nvmf_tgt_poll_group_000", 00:19:53.021 "listen_address": { 00:19:53.021 "trtype": "TCP", 00:19:53.021 "adrfam": "IPv4", 00:19:53.021 "traddr": "10.0.0.2", 00:19:53.021 "trsvcid": "4420" 00:19:53.021 }, 00:19:53.021 "peer_address": { 00:19:53.021 "trtype": "TCP", 00:19:53.021 "adrfam": "IPv4", 00:19:53.021 "traddr": "10.0.0.1", 
00:19:53.021 "trsvcid": "55616" 00:19:53.021 }, 00:19:53.021 "auth": { 00:19:53.021 "state": "completed", 00:19:53.021 "digest": "sha384", 00:19:53.021 "dhgroup": "ffdhe3072" 00:19:53.021 } 00:19:53.021 } 00:19:53.021 ]' 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.021 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.278 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.278 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.278 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.278 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.278 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.536 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.469 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.727 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.985 00:19:54.985 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.985 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.985 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.243 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.243 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.244 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:55.244 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.244 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.244 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.244 { 00:19:55.244 "cntlid": 67, 00:19:55.244 "qid": 0, 00:19:55.244 "state": "enabled", 00:19:55.244 "thread": "nvmf_tgt_poll_group_000", 00:19:55.244 "listen_address": { 00:19:55.244 "trtype": "TCP", 00:19:55.244 "adrfam": "IPv4", 00:19:55.244 "traddr": "10.0.0.2", 00:19:55.244 "trsvcid": "4420" 00:19:55.244 }, 00:19:55.244 "peer_address": { 00:19:55.244 "trtype": "TCP", 00:19:55.244 "adrfam": "IPv4", 00:19:55.244 "traddr": "10.0.0.1", 00:19:55.244 "trsvcid": "55658" 00:19:55.244 }, 00:19:55.244 "auth": { 00:19:55.244 "state": "completed", 00:19:55.244 "digest": "sha384", 00:19:55.244 "dhgroup": "ffdhe3072" 00:19:55.244 } 00:19:55.244 } 00:19:55.244 ]' 00:19:55.244 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.501 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.501 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.501 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.501 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.501 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.501 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.501 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.760 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.692 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.951 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.210 00:19:57.210 06:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.210 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.210 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.468 { 00:19:57.468 "cntlid": 69, 00:19:57.468 "qid": 0, 00:19:57.468 "state": "enabled", 00:19:57.468 "thread": "nvmf_tgt_poll_group_000", 00:19:57.468 "listen_address": { 00:19:57.468 "trtype": "TCP", 00:19:57.468 "adrfam": "IPv4", 00:19:57.468 "traddr": "10.0.0.2", 00:19:57.468 "trsvcid": "4420" 00:19:57.468 }, 00:19:57.468 "peer_address": { 00:19:57.468 "trtype": "TCP", 00:19:57.468 "adrfam": "IPv4", 00:19:57.468 "traddr": "10.0.0.1", 00:19:57.468 "trsvcid": "55696" 00:19:57.468 }, 00:19:57.468 "auth": { 00:19:57.468 "state": "completed", 00:19:57.468 "digest": "sha384", 00:19:57.468 "dhgroup": "ffdhe3072" 00:19:57.468 } 00:19:57.468 } 00:19:57.468 ]' 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.468 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.727 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.727 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.727 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.727 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.727 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.986 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:58.921 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.179 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.437 00:19:59.437 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.437 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.438 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.696 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.696 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.696 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.696 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.696 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.696 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.696 { 00:19:59.696 "cntlid": 71, 00:19:59.696 "qid": 0, 00:19:59.696 "state": "enabled", 00:19:59.696 "thread": "nvmf_tgt_poll_group_000", 
00:19:59.696 "listen_address": { 00:19:59.696 "trtype": "TCP", 00:19:59.696 "adrfam": "IPv4", 00:19:59.696 "traddr": "10.0.0.2", 00:19:59.696 "trsvcid": "4420" 00:19:59.696 }, 00:19:59.696 "peer_address": { 00:19:59.696 "trtype": "TCP", 00:19:59.696 "adrfam": "IPv4", 00:19:59.696 "traddr": "10.0.0.1", 00:19:59.696 "trsvcid": "49986" 00:19:59.696 }, 00:19:59.696 "auth": { 00:19:59.696 "state": "completed", 00:19:59.696 "digest": "sha384", 00:19:59.696 "dhgroup": "ffdhe3072" 00:19:59.696 } 00:19:59.696 } 00:19:59.696 ]' 00:19:59.696 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.696 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.696 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.954 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.954 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.954 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.954 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.954 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.237 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 
00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.174 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.433 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.001 00:20:02.001 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.001 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.001 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.258 06:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.258 { 00:20:02.258 "cntlid": 73, 00:20:02.258 "qid": 0, 00:20:02.258 "state": "enabled", 00:20:02.258 "thread": "nvmf_tgt_poll_group_000", 00:20:02.258 "listen_address": { 00:20:02.258 "trtype": "TCP", 00:20:02.258 "adrfam": "IPv4", 00:20:02.258 "traddr": "10.0.0.2", 00:20:02.258 "trsvcid": "4420" 00:20:02.258 }, 00:20:02.258 "peer_address": { 00:20:02.258 "trtype": "TCP", 00:20:02.258 "adrfam": "IPv4", 00:20:02.258 "traddr": "10.0.0.1", 00:20:02.258 "trsvcid": "50014" 00:20:02.258 }, 00:20:02.258 "auth": { 00:20:02.258 "state": "completed", 00:20:02.258 "digest": "sha384", 00:20:02.258 "dhgroup": "ffdhe4096" 00:20:02.258 } 00:20:02.258 } 00:20:02.258 ]' 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.258 06:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.258 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.516 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:20:03.450 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.450 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.450 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.450 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.451 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.451 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.451 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:20:03.451 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.708 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:03.708 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.708 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.709 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.275 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.275 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.533 { 00:20:04.533 "cntlid": 75, 00:20:04.533 "qid": 0, 00:20:04.533 "state": "enabled", 00:20:04.533 "thread": "nvmf_tgt_poll_group_000", 00:20:04.533 "listen_address": { 00:20:04.533 "trtype": "TCP", 00:20:04.533 "adrfam": "IPv4", 00:20:04.533 "traddr": "10.0.0.2", 00:20:04.533 "trsvcid": "4420" 00:20:04.533 }, 00:20:04.533 "peer_address": { 00:20:04.533 "trtype": "TCP", 00:20:04.533 "adrfam": "IPv4", 00:20:04.533 "traddr": "10.0.0.1", 00:20:04.533 "trsvcid": "50054" 00:20:04.533 
}, 00:20:04.533 "auth": { 00:20:04.533 "state": "completed", 00:20:04.533 "digest": "sha384", 00:20:04.533 "dhgroup": "ffdhe4096" 00:20:04.533 } 00:20:04.533 } 00:20:04.533 ]' 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.533 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.791 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.727 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.985 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.243 00:20:06.243 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.243 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.243 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.501 { 00:20:06.501 "cntlid": 77, 00:20:06.501 "qid": 0, 00:20:06.501 "state": "enabled", 00:20:06.501 "thread": "nvmf_tgt_poll_group_000", 00:20:06.501 "listen_address": { 00:20:06.501 "trtype": "TCP", 00:20:06.501 "adrfam": "IPv4", 00:20:06.501 "traddr": "10.0.0.2", 00:20:06.501 "trsvcid": "4420" 00:20:06.501 }, 00:20:06.501 "peer_address": { 00:20:06.501 "trtype": "TCP", 00:20:06.501 "adrfam": "IPv4", 00:20:06.501 "traddr": "10.0.0.1", 00:20:06.501 "trsvcid": "50078" 00:20:06.501 }, 00:20:06.501 "auth": { 00:20:06.501 "state": "completed", 00:20:06.501 "digest": "sha384", 00:20:06.501 "dhgroup": "ffdhe4096" 00:20:06.501 } 00:20:06.501 } 00:20:06.501 ]' 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.501 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.759 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.759 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.759 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.759 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.759 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:07.017 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:07.954 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:08.212 06:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.212 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.470 00:20:08.470 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.470 06:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.470 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.729 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.729 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.729 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.729 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.729 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.729 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.729 { 00:20:08.729 "cntlid": 79, 00:20:08.729 "qid": 0, 00:20:08.729 "state": "enabled", 00:20:08.729 "thread": "nvmf_tgt_poll_group_000", 00:20:08.729 "listen_address": { 00:20:08.729 "trtype": "TCP", 00:20:08.729 "adrfam": "IPv4", 00:20:08.729 "traddr": "10.0.0.2", 00:20:08.729 "trsvcid": "4420" 00:20:08.729 }, 00:20:08.729 "peer_address": { 00:20:08.729 "trtype": "TCP", 00:20:08.729 "adrfam": "IPv4", 00:20:08.729 "traddr": "10.0.0.1", 00:20:08.729 "trsvcid": "36538" 00:20:08.729 }, 00:20:08.729 "auth": { 00:20:08.729 "state": "completed", 00:20:08.729 "digest": "sha384", 00:20:08.729 "dhgroup": "ffdhe4096" 00:20:08.729 } 00:20:08.729 } 00:20:08.729 ]' 00:20:08.729 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.987 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.987 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.987 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.987 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.987 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.987 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.987 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.246 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.181 06:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.181 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.439 06:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.439 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.005 00:20:11.005 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.005 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.005 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.263 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.263 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.263 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.263 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.263 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.264 { 00:20:11.264 "cntlid": 81, 00:20:11.264 "qid": 0, 00:20:11.264 "state": "enabled", 00:20:11.264 "thread": 
"nvmf_tgt_poll_group_000", 00:20:11.264 "listen_address": { 00:20:11.264 "trtype": "TCP", 00:20:11.264 "adrfam": "IPv4", 00:20:11.264 "traddr": "10.0.0.2", 00:20:11.264 "trsvcid": "4420" 00:20:11.264 }, 00:20:11.264 "peer_address": { 00:20:11.264 "trtype": "TCP", 00:20:11.264 "adrfam": "IPv4", 00:20:11.264 "traddr": "10.0.0.1", 00:20:11.264 "trsvcid": "36566" 00:20:11.264 }, 00:20:11.264 "auth": { 00:20:11.264 "state": "completed", 00:20:11.264 "digest": "sha384", 00:20:11.264 "dhgroup": "ffdhe6144" 00:20:11.264 } 00:20:11.264 } 00:20:11.264 ]' 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.264 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.522 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.452 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.709 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.274 00:20:13.274 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.274 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.274 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.531 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.531 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.531 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.531 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.531 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.531 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.532 { 00:20:13.532 "cntlid": 83, 00:20:13.532 "qid": 0, 00:20:13.532 "state": "enabled", 00:20:13.532 "thread": "nvmf_tgt_poll_group_000", 00:20:13.532 "listen_address": { 00:20:13.532 "trtype": "TCP", 00:20:13.532 "adrfam": "IPv4", 00:20:13.532 "traddr": "10.0.0.2", 00:20:13.532 "trsvcid": "4420" 00:20:13.532 }, 00:20:13.532 "peer_address": { 00:20:13.532 "trtype": "TCP", 00:20:13.532 "adrfam": "IPv4", 00:20:13.532 "traddr": "10.0.0.1", 00:20:13.532 "trsvcid": "36592" 00:20:13.532 }, 00:20:13.532 "auth": { 00:20:13.532 "state": "completed", 00:20:13.532 "digest": "sha384", 00:20:13.532 "dhgroup": "ffdhe6144" 00:20:13.532 } 00:20:13.532 } 00:20:13.532 ]' 00:20:13.532 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.532 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.532 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.789 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.789 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.789 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.789 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.789 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.047 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:20:14.980 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.980 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.980 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.980 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.980 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.981 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.981 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.981 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.239 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.239 06:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.806 00:20:15.806 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.806 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.806 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.063 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.063 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.063 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.063 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.063 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.063 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.063 { 00:20:16.063 "cntlid": 85, 00:20:16.063 "qid": 0, 00:20:16.063 "state": "enabled", 00:20:16.063 "thread": "nvmf_tgt_poll_group_000", 00:20:16.063 "listen_address": { 00:20:16.063 "trtype": "TCP", 00:20:16.063 "adrfam": "IPv4", 00:20:16.063 "traddr": "10.0.0.2", 00:20:16.063 "trsvcid": "4420" 00:20:16.063 }, 00:20:16.063 "peer_address": { 00:20:16.063 "trtype": "TCP", 00:20:16.063 "adrfam": "IPv4", 00:20:16.063 "traddr": "10.0.0.1", 
00:20:16.063 "trsvcid": "36620" 00:20:16.063 }, 00:20:16.063 "auth": { 00:20:16.063 "state": "completed", 00:20:16.063 "digest": "sha384", 00:20:16.063 "dhgroup": "ffdhe6144" 00:20:16.063 } 00:20:16.063 } 00:20:16.063 ]' 00:20:16.063 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.064 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.064 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.064 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.064 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.321 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.321 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.321 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.579 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:20:17.515 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.515 06:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.515 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.515 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.515 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.515 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.515 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:17.515 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.772 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.339 00:20:18.339 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.339 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.339 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.598 06:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.598 { 00:20:18.598 "cntlid": 87, 00:20:18.598 "qid": 0, 00:20:18.598 "state": "enabled", 00:20:18.598 "thread": "nvmf_tgt_poll_group_000", 00:20:18.598 "listen_address": { 00:20:18.598 "trtype": "TCP", 00:20:18.598 "adrfam": "IPv4", 00:20:18.598 "traddr": "10.0.0.2", 00:20:18.598 "trsvcid": "4420" 00:20:18.598 }, 00:20:18.598 "peer_address": { 00:20:18.598 "trtype": "TCP", 00:20:18.598 "adrfam": "IPv4", 00:20:18.598 "traddr": "10.0.0.1", 00:20:18.598 "trsvcid": "36650" 00:20:18.598 }, 00:20:18.598 "auth": { 00:20:18.598 "state": "completed", 00:20:18.598 "digest": "sha384", 00:20:18.598 "dhgroup": "ffdhe6144" 00:20:18.598 } 00:20:18.598 } 00:20:18.598 ]' 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.598 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.858 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.232 06:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.232 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:21.169 00:20:21.169 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.169 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.169 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.427 { 00:20:21.427 "cntlid": 89, 00:20:21.427 "qid": 0, 00:20:21.427 "state": "enabled", 00:20:21.427 "thread": "nvmf_tgt_poll_group_000", 00:20:21.427 "listen_address": { 00:20:21.427 "trtype": "TCP", 00:20:21.427 "adrfam": "IPv4", 00:20:21.427 "traddr": "10.0.0.2", 00:20:21.427 "trsvcid": "4420" 00:20:21.427 }, 00:20:21.427 "peer_address": { 00:20:21.427 "trtype": "TCP", 00:20:21.427 "adrfam": "IPv4", 00:20:21.427 "traddr": "10.0.0.1", 00:20:21.427 "trsvcid": "47188" 00:20:21.427 }, 00:20:21.427 "auth": { 00:20:21.427 "state": "completed", 00:20:21.427 "digest": "sha384", 00:20:21.427 "dhgroup": "ffdhe8192" 00:20:21.427 } 00:20:21.427 } 00:20:21.427 ]' 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.427 
06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.427 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.686 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.621 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.879 06:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.818 00:20:23.818 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.818 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.818 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:24.076 { 00:20:24.076 "cntlid": 91, 00:20:24.076 "qid": 0, 00:20:24.076 "state": "enabled", 00:20:24.076 "thread": "nvmf_tgt_poll_group_000", 00:20:24.076 "listen_address": { 00:20:24.076 "trtype": "TCP", 00:20:24.076 "adrfam": "IPv4", 00:20:24.076 "traddr": "10.0.0.2", 00:20:24.076 "trsvcid": "4420" 00:20:24.076 }, 00:20:24.076 "peer_address": { 00:20:24.076 "trtype": "TCP", 00:20:24.076 "adrfam": "IPv4", 00:20:24.076 "traddr": "10.0.0.1", 00:20:24.076 "trsvcid": "47212" 00:20:24.076 }, 00:20:24.076 "auth": { 00:20:24.076 "state": "completed", 00:20:24.076 "digest": "sha384", 00:20:24.076 "dhgroup": "ffdhe8192" 00:20:24.076 } 00:20:24.076 } 00:20:24.076 ]' 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.076 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.077 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.077 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.335 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.335 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.335 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.594 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.530 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.788 06:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.724 00:20:26.724 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.724 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.724 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.982 { 00:20:26.982 "cntlid": 93, 00:20:26.982 "qid": 0, 00:20:26.982 "state": "enabled", 00:20:26.982 "thread": "nvmf_tgt_poll_group_000", 00:20:26.982 "listen_address": { 00:20:26.982 "trtype": "TCP", 00:20:26.982 "adrfam": "IPv4", 00:20:26.982 "traddr": "10.0.0.2", 00:20:26.982 "trsvcid": "4420" 00:20:26.982 }, 00:20:26.982 "peer_address": { 00:20:26.982 "trtype": "TCP", 00:20:26.982 "adrfam": "IPv4", 00:20:26.982 "traddr": "10.0.0.1", 00:20:26.982 "trsvcid": "47246" 00:20:26.982 }, 00:20:26.982 "auth": { 00:20:26.982 "state": "completed", 00:20:26.982 "digest": "sha384", 00:20:26.982 "dhgroup": "ffdhe8192" 00:20:26.982 } 00:20:26.982 } 00:20:26.982 ]' 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.982 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.242 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:20:28.178 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.436 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.436 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.436 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.436 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.436 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.436 06:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.436 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:28.695 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.628 00:20:29.628 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.628 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.628 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.886 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.886 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.886 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.886 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.886 { 00:20:29.886 "cntlid": 95, 00:20:29.886 "qid": 0, 00:20:29.886 "state": "enabled", 00:20:29.886 "thread": "nvmf_tgt_poll_group_000", 00:20:29.886 "listen_address": { 00:20:29.886 "trtype": "TCP", 00:20:29.886 "adrfam": "IPv4", 00:20:29.886 "traddr": "10.0.0.2", 00:20:29.886 "trsvcid": "4420" 00:20:29.886 }, 00:20:29.886 "peer_address": { 00:20:29.886 "trtype": "TCP", 00:20:29.886 "adrfam": "IPv4", 00:20:29.886 "traddr": "10.0.0.1", 
00:20:29.886 "trsvcid": "35020" 00:20:29.886 }, 00:20:29.886 "auth": { 00:20:29.886 "state": "completed", 00:20:29.886 "digest": "sha384", 00:20:29.886 "dhgroup": "ffdhe8192" 00:20:29.886 } 00:20:29.886 } 00:20:29.886 ]' 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.886 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.145 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.112 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.370 06:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.370 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.630 00:20:31.889 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.889 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.889 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.889 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.889 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:31.889 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.889 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.160 { 00:20:32.160 "cntlid": 97, 00:20:32.160 "qid": 0, 00:20:32.160 "state": "enabled", 00:20:32.160 "thread": "nvmf_tgt_poll_group_000", 00:20:32.160 "listen_address": { 00:20:32.160 "trtype": "TCP", 00:20:32.160 "adrfam": "IPv4", 00:20:32.160 "traddr": "10.0.0.2", 00:20:32.160 "trsvcid": "4420" 00:20:32.160 }, 00:20:32.160 "peer_address": { 00:20:32.160 "trtype": "TCP", 00:20:32.160 "adrfam": "IPv4", 00:20:32.160 "traddr": "10.0.0.1", 00:20:32.160 "trsvcid": "35042" 00:20:32.160 }, 00:20:32.160 "auth": { 00:20:32.160 "state": "completed", 00:20:32.160 "digest": "sha512", 00:20:32.160 "dhgroup": "null" 00:20:32.160 } 00:20:32.160 } 00:20:32.160 ]' 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:32.160 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.422 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.354 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.612 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.870 00:20:33.870 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.870 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.870 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.128 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.128 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.128 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.128 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.128 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.128 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.128 { 00:20:34.128 "cntlid": 99, 00:20:34.128 "qid": 0, 00:20:34.128 "state": "enabled", 00:20:34.128 "thread": "nvmf_tgt_poll_group_000", 00:20:34.128 "listen_address": { 00:20:34.128 "trtype": "TCP", 00:20:34.128 "adrfam": "IPv4", 00:20:34.128 "traddr": "10.0.0.2", 00:20:34.128 "trsvcid": "4420" 00:20:34.128 }, 00:20:34.128 "peer_address": { 00:20:34.128 "trtype": "TCP", 00:20:34.128 "adrfam": "IPv4", 00:20:34.128 "traddr": "10.0.0.1", 00:20:34.128 "trsvcid": "35060" 00:20:34.128 }, 00:20:34.128 "auth": { 00:20:34.128 "state": "completed", 00:20:34.128 "digest": "sha512", 00:20:34.128 "dhgroup": "null" 00:20:34.128 } 00:20:34.128 } 00:20:34.128 ]' 00:20:34.128 
06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.386 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.386 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.386 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:34.386 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.386 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.386 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.386 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.644 06:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:20:35.581 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.581 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.581 06:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.581 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.581 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.581 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.581 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.581 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.147 06:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.147 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.405 00:20:36.405 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.405 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.405 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.663 { 00:20:36.663 "cntlid": 101, 00:20:36.663 "qid": 0, 00:20:36.663 "state": "enabled", 00:20:36.663 "thread": "nvmf_tgt_poll_group_000", 00:20:36.663 "listen_address": { 00:20:36.663 "trtype": "TCP", 00:20:36.663 "adrfam": "IPv4", 00:20:36.663 "traddr": "10.0.0.2", 00:20:36.663 "trsvcid": "4420" 00:20:36.663 }, 00:20:36.663 "peer_address": { 00:20:36.663 "trtype": "TCP", 00:20:36.663 "adrfam": "IPv4", 00:20:36.663 "traddr": "10.0.0.1", 00:20:36.663 "trsvcid": "35082" 00:20:36.663 }, 00:20:36.663 "auth": { 00:20:36.663 "state": "completed", 00:20:36.663 "digest": "sha512", 00:20:36.663 "dhgroup": "null" 00:20:36.663 } 00:20:36.663 } 00:20:36.663 ]' 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.663 06:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.921 06:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.857 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.117 06:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.117 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.376 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.376 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.377 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.635 00:20:38.635 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.635 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.635 06:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.893 { 00:20:38.893 "cntlid": 103, 00:20:38.893 "qid": 0, 00:20:38.893 "state": "enabled", 00:20:38.893 "thread": "nvmf_tgt_poll_group_000", 00:20:38.893 "listen_address": { 00:20:38.893 "trtype": "TCP", 00:20:38.893 "adrfam": "IPv4", 00:20:38.893 "traddr": "10.0.0.2", 00:20:38.893 "trsvcid": "4420" 00:20:38.893 }, 00:20:38.893 "peer_address": { 00:20:38.893 "trtype": "TCP", 00:20:38.893 "adrfam": "IPv4", 00:20:38.893 "traddr": "10.0.0.1", 00:20:38.893 "trsvcid": "37422" 00:20:38.893 }, 00:20:38.893 "auth": { 00:20:38.893 "state": "completed", 00:20:38.893 "digest": "sha512", 00:20:38.893 "dhgroup": "null" 00:20:38.893 } 00:20:38.893 } 00:20:38.893 ]' 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.893 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.152 06:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.529 06:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.529 06:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.787 00:20:40.787 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.787 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.787 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.044 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.044 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.044 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.044 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.044 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.044 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.044 { 00:20:41.044 "cntlid": 105, 00:20:41.044 "qid": 0, 00:20:41.044 "state": "enabled", 00:20:41.044 "thread": "nvmf_tgt_poll_group_000", 00:20:41.044 "listen_address": { 00:20:41.044 "trtype": "TCP", 00:20:41.044 "adrfam": "IPv4", 00:20:41.044 "traddr": "10.0.0.2", 00:20:41.044 "trsvcid": "4420" 00:20:41.044 }, 00:20:41.044 "peer_address": { 00:20:41.044 "trtype": "TCP", 00:20:41.044 "adrfam": "IPv4", 00:20:41.044 "traddr": "10.0.0.1", 
00:20:41.044 "trsvcid": "37442" 00:20:41.044 }, 00:20:41.044 "auth": { 00:20:41.044 "state": "completed", 00:20:41.044 "digest": "sha512", 00:20:41.044 "dhgroup": "ffdhe2048" 00:20:41.044 } 00:20:41.044 } 00:20:41.044 ]' 00:20:41.044 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.302 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.302 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.302 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.302 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.302 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.302 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.302 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.560 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.494 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.753 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.754 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.754 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.754 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.012 00:20:43.012 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.012 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.012 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.269 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.269 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.269 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:43.269 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.269 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.269 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.269 { 00:20:43.269 "cntlid": 107, 00:20:43.269 "qid": 0, 00:20:43.269 "state": "enabled", 00:20:43.269 "thread": "nvmf_tgt_poll_group_000", 00:20:43.269 "listen_address": { 00:20:43.269 "trtype": "TCP", 00:20:43.269 "adrfam": "IPv4", 00:20:43.269 "traddr": "10.0.0.2", 00:20:43.269 "trsvcid": "4420" 00:20:43.269 }, 00:20:43.269 "peer_address": { 00:20:43.269 "trtype": "TCP", 00:20:43.269 "adrfam": "IPv4", 00:20:43.269 "traddr": "10.0.0.1", 00:20:43.269 "trsvcid": "37466" 00:20:43.269 }, 00:20:43.269 "auth": { 00:20:43.269 "state": "completed", 00:20:43.269 "digest": "sha512", 00:20:43.269 "dhgroup": "ffdhe2048" 00:20:43.269 } 00:20:43.269 } 00:20:43.269 ]' 00:20:43.269 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.526 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.526 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.526 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.527 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.527 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.527 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.527 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.784 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.719 06:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.977 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.978 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.978 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.235 00:20:45.235 06:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.235 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.235 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.499 { 00:20:45.499 "cntlid": 109, 00:20:45.499 "qid": 0, 00:20:45.499 "state": "enabled", 00:20:45.499 "thread": "nvmf_tgt_poll_group_000", 00:20:45.499 "listen_address": { 00:20:45.499 "trtype": "TCP", 00:20:45.499 "adrfam": "IPv4", 00:20:45.499 "traddr": "10.0.0.2", 00:20:45.499 "trsvcid": "4420" 00:20:45.499 }, 00:20:45.499 "peer_address": { 00:20:45.499 "trtype": "TCP", 00:20:45.499 "adrfam": "IPv4", 00:20:45.499 "traddr": "10.0.0.1", 00:20:45.499 "trsvcid": "37484" 00:20:45.499 }, 00:20:45.499 "auth": { 00:20:45.499 "state": "completed", 00:20:45.499 "digest": "sha512", 00:20:45.499 "dhgroup": "ffdhe2048" 00:20:45.499 } 00:20:45.499 } 00:20:45.499 ]' 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.499 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.795 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.795 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.795 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.795 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.736 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.993 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.559 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.559 { 00:20:47.559 "cntlid": 111, 00:20:47.559 "qid": 0, 00:20:47.559 "state": "enabled", 00:20:47.559 "thread": "nvmf_tgt_poll_group_000", 
00:20:47.559 "listen_address": { 00:20:47.559 "trtype": "TCP", 00:20:47.559 "adrfam": "IPv4", 00:20:47.559 "traddr": "10.0.0.2", 00:20:47.559 "trsvcid": "4420" 00:20:47.559 }, 00:20:47.559 "peer_address": { 00:20:47.559 "trtype": "TCP", 00:20:47.559 "adrfam": "IPv4", 00:20:47.559 "traddr": "10.0.0.1", 00:20:47.559 "trsvcid": "37502" 00:20:47.559 }, 00:20:47.559 "auth": { 00:20:47.559 "state": "completed", 00:20:47.559 "digest": "sha512", 00:20:47.559 "dhgroup": "ffdhe2048" 00:20:47.559 } 00:20:47.559 } 00:20:47.559 ]' 00:20:47.559 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.818 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.818 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.818 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.818 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.818 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.818 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.818 06:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.078 06:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 
00:20:49.012 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.012 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.012 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.012 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.012 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.012 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.012 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.013 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.013 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.270 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.527 00:20:49.527 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.527 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.527 06:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.784 06:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.784 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.784 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.784 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.784 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.784 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.784 { 00:20:49.784 "cntlid": 113, 00:20:49.784 "qid": 0, 00:20:49.784 "state": "enabled", 00:20:49.784 "thread": "nvmf_tgt_poll_group_000", 00:20:49.784 "listen_address": { 00:20:49.784 "trtype": "TCP", 00:20:49.784 "adrfam": "IPv4", 00:20:49.784 "traddr": "10.0.0.2", 00:20:49.784 "trsvcid": "4420" 00:20:49.784 }, 00:20:49.784 "peer_address": { 00:20:49.784 "trtype": "TCP", 00:20:49.784 "adrfam": "IPv4", 00:20:49.784 "traddr": "10.0.0.1", 00:20:49.784 "trsvcid": "55580" 00:20:49.784 }, 00:20:49.784 "auth": { 00:20:49.784 "state": "completed", 00:20:49.784 "digest": "sha512", 00:20:49.784 "dhgroup": "ffdhe3072" 00:20:49.784 } 00:20:49.784 } 00:20:49.784 ]' 00:20:49.784 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.042 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.042 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.042 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:50.042 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.042 06:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.042 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.042 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.299 06:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:51.231 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.490 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.748 00:20:51.748 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.748 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.748 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.007 { 00:20:52.007 "cntlid": 115, 00:20:52.007 "qid": 0, 00:20:52.007 "state": "enabled", 00:20:52.007 "thread": "nvmf_tgt_poll_group_000", 00:20:52.007 "listen_address": { 00:20:52.007 "trtype": "TCP", 00:20:52.007 "adrfam": "IPv4", 00:20:52.007 "traddr": "10.0.0.2", 00:20:52.007 "trsvcid": "4420" 00:20:52.007 }, 00:20:52.007 "peer_address": { 00:20:52.007 "trtype": "TCP", 00:20:52.007 "adrfam": "IPv4", 00:20:52.007 "traddr": "10.0.0.1", 00:20:52.007 "trsvcid": "55608" 00:20:52.007 
}, 00:20:52.007 "auth": { 00:20:52.007 "state": "completed", 00:20:52.007 "digest": "sha512", 00:20:52.007 "dhgroup": "ffdhe3072" 00:20:52.007 } 00:20:52.007 } 00:20:52.007 ]' 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.007 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.264 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.264 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.264 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.264 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.521 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.454 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.713 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.971 00:20:53.971 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.971 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.971 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.229 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.229 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.229 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.229 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:54.229 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.229 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.229 { 00:20:54.229 "cntlid": 117, 00:20:54.229 "qid": 0, 00:20:54.229 "state": "enabled", 00:20:54.229 "thread": "nvmf_tgt_poll_group_000", 00:20:54.229 "listen_address": { 00:20:54.229 "trtype": "TCP", 00:20:54.229 "adrfam": "IPv4", 00:20:54.229 "traddr": "10.0.0.2", 00:20:54.229 "trsvcid": "4420" 00:20:54.229 }, 00:20:54.229 "peer_address": { 00:20:54.229 "trtype": "TCP", 00:20:54.229 "adrfam": "IPv4", 00:20:54.229 "traddr": "10.0.0.1", 00:20:54.229 "trsvcid": "55620" 00:20:54.229 }, 00:20:54.229 "auth": { 00:20:54.229 "state": "completed", 00:20:54.229 "digest": "sha512", 00:20:54.229 "dhgroup": "ffdhe3072" 00:20:54.229 } 00:20:54.229 } 00:20:54.229 ]' 00:20:54.229 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.487 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.487 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.487 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.487 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.487 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.487 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.487 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:54.745 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.685 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.944 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:55.944 06:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.944 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.944 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.944 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.944 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.944 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.944 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.945 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.945 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.945 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.945 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.514 00:20:56.514 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.514 06:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.514 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.514 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.514 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.514 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.514 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.773 { 00:20:56.773 "cntlid": 119, 00:20:56.773 "qid": 0, 00:20:56.773 "state": "enabled", 00:20:56.773 "thread": "nvmf_tgt_poll_group_000", 00:20:56.773 "listen_address": { 00:20:56.773 "trtype": "TCP", 00:20:56.773 "adrfam": "IPv4", 00:20:56.773 "traddr": "10.0.0.2", 00:20:56.773 "trsvcid": "4420" 00:20:56.773 }, 00:20:56.773 "peer_address": { 00:20:56.773 "trtype": "TCP", 00:20:56.773 "adrfam": "IPv4", 00:20:56.773 "traddr": "10.0.0.1", 00:20:56.773 "trsvcid": "55660" 00:20:56.773 }, 00:20:56.773 "auth": { 00:20:56.773 "state": "completed", 00:20:56.773 "digest": "sha512", 00:20:56.773 "dhgroup": "ffdhe3072" 00:20:56.773 } 00:20:56.773 } 00:20:56.773 ]' 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.773 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.031 06:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:20:57.969 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.969 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.969 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.970 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.970 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.970 06:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.970 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.970 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.970 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.228 06:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.228 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.794 00:20:58.794 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.794 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.794 06:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.054 { 00:20:59.054 "cntlid": 121, 00:20:59.054 "qid": 0, 00:20:59.054 "state": "enabled", 00:20:59.054 "thread": 
"nvmf_tgt_poll_group_000", 00:20:59.054 "listen_address": { 00:20:59.054 "trtype": "TCP", 00:20:59.054 "adrfam": "IPv4", 00:20:59.054 "traddr": "10.0.0.2", 00:20:59.054 "trsvcid": "4420" 00:20:59.054 }, 00:20:59.054 "peer_address": { 00:20:59.054 "trtype": "TCP", 00:20:59.054 "adrfam": "IPv4", 00:20:59.054 "traddr": "10.0.0.1", 00:20:59.054 "trsvcid": "37030" 00:20:59.054 }, 00:20:59.054 "auth": { 00:20:59.054 "state": "completed", 00:20:59.054 "digest": "sha512", 00:20:59.054 "dhgroup": "ffdhe4096" 00:20:59.054 } 00:20:59.054 } 00:20:59.054 ]' 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.054 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.313 06:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.300 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.558 06:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.127 00:21:01.127 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.127 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.127 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.385 { 00:21:01.385 "cntlid": 123, 00:21:01.385 "qid": 0, 00:21:01.385 "state": "enabled", 00:21:01.385 "thread": "nvmf_tgt_poll_group_000", 00:21:01.385 "listen_address": { 00:21:01.385 "trtype": "TCP", 00:21:01.385 "adrfam": "IPv4", 00:21:01.385 "traddr": "10.0.0.2", 00:21:01.385 "trsvcid": "4420" 00:21:01.385 }, 00:21:01.385 "peer_address": { 00:21:01.385 "trtype": "TCP", 00:21:01.385 "adrfam": "IPv4", 00:21:01.385 "traddr": "10.0.0.1", 00:21:01.385 "trsvcid": "37050" 00:21:01.385 }, 00:21:01.385 "auth": { 00:21:01.385 "state": "completed", 00:21:01.385 "digest": "sha512", 00:21:01.385 "dhgroup": "ffdhe4096" 00:21:01.385 } 00:21:01.385 } 00:21:01.385 ]' 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.385 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.386 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.386 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.386 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.386 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.386 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.645 06:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.630 06:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.888 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.888 06:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.457 00:21:03.457 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.457 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.457 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.716 { 00:21:03.716 "cntlid": 125, 00:21:03.716 "qid": 0, 00:21:03.716 "state": "enabled", 00:21:03.716 "thread": "nvmf_tgt_poll_group_000", 00:21:03.716 "listen_address": { 00:21:03.716 "trtype": "TCP", 00:21:03.716 "adrfam": "IPv4", 00:21:03.716 "traddr": "10.0.0.2", 00:21:03.716 "trsvcid": "4420" 00:21:03.716 }, 00:21:03.716 "peer_address": { 00:21:03.716 "trtype": "TCP", 00:21:03.716 "adrfam": "IPv4", 00:21:03.716 "traddr": "10.0.0.1", 
00:21:03.716 "trsvcid": "37082" 00:21:03.716 }, 00:21:03.716 "auth": { 00:21:03.716 "state": "completed", 00:21:03.716 "digest": "sha512", 00:21:03.716 "dhgroup": "ffdhe4096" 00:21:03.716 } 00:21:03.716 } 00:21:03.716 ]' 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.716 06:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.977 06:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:21:04.912 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.912 06:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.912 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.912 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.912 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.912 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.912 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:04.912 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.478 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.736 00:21:05.736 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.736 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.736 06:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.994 06:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.994 { 00:21:05.994 "cntlid": 127, 00:21:05.994 "qid": 0, 00:21:05.994 "state": "enabled", 00:21:05.994 "thread": "nvmf_tgt_poll_group_000", 00:21:05.994 "listen_address": { 00:21:05.994 "trtype": "TCP", 00:21:05.994 "adrfam": "IPv4", 00:21:05.994 "traddr": "10.0.0.2", 00:21:05.994 "trsvcid": "4420" 00:21:05.994 }, 00:21:05.994 "peer_address": { 00:21:05.994 "trtype": "TCP", 00:21:05.994 "adrfam": "IPv4", 00:21:05.994 "traddr": "10.0.0.1", 00:21:05.994 "trsvcid": "37104" 00:21:05.994 }, 00:21:05.994 "auth": { 00:21:05.994 "state": "completed", 00:21:05.994 "digest": "sha512", 00:21:05.994 "dhgroup": "ffdhe4096" 00:21:05.994 } 00:21:05.994 } 00:21:05.994 ]' 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.994 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.253 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.253 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.253 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.513 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:21:07.447 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.448 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.705 06:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.705 06:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:08.273 00:21:08.273 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.273 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.273 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.532 { 00:21:08.532 "cntlid": 129, 00:21:08.532 "qid": 0, 00:21:08.532 "state": "enabled", 00:21:08.532 "thread": "nvmf_tgt_poll_group_000", 00:21:08.532 "listen_address": { 00:21:08.532 "trtype": "TCP", 00:21:08.532 "adrfam": "IPv4", 00:21:08.532 "traddr": "10.0.0.2", 00:21:08.532 "trsvcid": "4420" 00:21:08.532 }, 00:21:08.532 "peer_address": { 00:21:08.532 "trtype": "TCP", 00:21:08.532 "adrfam": "IPv4", 00:21:08.532 "traddr": "10.0.0.1", 00:21:08.532 "trsvcid": "37130" 00:21:08.532 }, 00:21:08.532 "auth": { 00:21:08.532 "state": "completed", 00:21:08.532 "digest": "sha512", 00:21:08.532 "dhgroup": "ffdhe6144" 00:21:08.532 } 00:21:08.532 } 00:21:08.532 ]' 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.532 
06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.532 06:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.791 06:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.726 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.985 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.922 00:21:10.922 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.922 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.922 06:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:21:10.922 { 00:21:10.922 "cntlid": 131, 00:21:10.922 "qid": 0, 00:21:10.922 "state": "enabled", 00:21:10.922 "thread": "nvmf_tgt_poll_group_000", 00:21:10.922 "listen_address": { 00:21:10.922 "trtype": "TCP", 00:21:10.922 "adrfam": "IPv4", 00:21:10.922 "traddr": "10.0.0.2", 00:21:10.922 "trsvcid": "4420" 00:21:10.922 }, 00:21:10.922 "peer_address": { 00:21:10.922 "trtype": "TCP", 00:21:10.922 "adrfam": "IPv4", 00:21:10.922 "traddr": "10.0.0.1", 00:21:10.922 "trsvcid": "41252" 00:21:10.922 }, 00:21:10.922 "auth": { 00:21:10.922 "state": "completed", 00:21:10.922 "digest": "sha512", 00:21:10.922 "dhgroup": "ffdhe6144" 00:21:10.922 } 00:21:10.922 } 00:21:10.922 ]' 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.922 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.180 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.180 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.180 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.438 06:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.375 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.634 06:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.202 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.202 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.202 { 00:21:13.202 "cntlid": 133, 00:21:13.202 "qid": 0, 00:21:13.202 "state": "enabled", 00:21:13.202 "thread": "nvmf_tgt_poll_group_000", 00:21:13.202 "listen_address": { 00:21:13.202 "trtype": "TCP", 00:21:13.202 "adrfam": "IPv4", 00:21:13.202 "traddr": "10.0.0.2", 00:21:13.203 "trsvcid": "4420" 00:21:13.203 }, 00:21:13.203 "peer_address": { 00:21:13.203 "trtype": "TCP", 00:21:13.203 "adrfam": "IPv4", 00:21:13.203 "traddr": "10.0.0.1", 00:21:13.203 "trsvcid": "41280" 00:21:13.203 }, 00:21:13.203 "auth": { 00:21:13.203 "state": "completed", 00:21:13.203 "digest": "sha512", 00:21:13.203 "dhgroup": "ffdhe6144" 00:21:13.203 } 00:21:13.203 } 00:21:13.203 ]' 00:21:13.203 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.461 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.461 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.461 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:21:13.461 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.461 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.461 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.461 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.718 06:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:21:14.654 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.654 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.655 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.655 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.655 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.655 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.655 06:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.655 06:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:21:14.913 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.513 00:21:15.513 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.513 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.513 06:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.772 { 00:21:15.772 "cntlid": 135, 00:21:15.772 "qid": 0, 00:21:15.772 "state": "enabled", 00:21:15.772 "thread": "nvmf_tgt_poll_group_000", 00:21:15.772 "listen_address": { 00:21:15.772 "trtype": "TCP", 00:21:15.772 "adrfam": "IPv4", 00:21:15.772 "traddr": "10.0.0.2", 00:21:15.772 "trsvcid": "4420" 00:21:15.772 }, 00:21:15.772 "peer_address": { 00:21:15.772 "trtype": "TCP", 00:21:15.772 "adrfam": "IPv4", 00:21:15.772 "traddr": "10.0.0.1", 
00:21:15.772 "trsvcid": "41298" 00:21:15.772 }, 00:21:15.772 "auth": { 00:21:15.772 "state": "completed", 00:21:15.772 "digest": "sha512", 00:21:15.772 "dhgroup": "ffdhe6144" 00:21:15.772 } 00:21:15.772 } 00:21:15.772 ]' 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.772 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.030 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.030 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.030 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.030 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.030 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.288 06:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.220 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.478 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.415 00:21:18.415 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.415 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.415 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.673 { 00:21:18.673 "cntlid": 137, 00:21:18.673 "qid": 0, 00:21:18.673 "state": "enabled", 00:21:18.673 "thread": "nvmf_tgt_poll_group_000", 00:21:18.673 "listen_address": { 00:21:18.673 "trtype": "TCP", 00:21:18.673 "adrfam": "IPv4", 00:21:18.673 "traddr": "10.0.0.2", 00:21:18.673 "trsvcid": "4420" 00:21:18.673 }, 00:21:18.673 "peer_address": { 00:21:18.673 "trtype": "TCP", 00:21:18.673 "adrfam": "IPv4", 00:21:18.673 "traddr": "10.0.0.1", 00:21:18.673 "trsvcid": "41330" 00:21:18.673 }, 00:21:18.673 "auth": { 00:21:18.673 "state": "completed", 00:21:18.673 "digest": "sha512", 00:21:18.673 "dhgroup": "ffdhe8192" 00:21:18.673 } 00:21:18.673 } 00:21:18.673 ]' 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.673 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.931 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.865 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.123 06:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.123 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:21.058 00:21:21.058 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.058 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.058 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.316 { 00:21:21.316 "cntlid": 139, 00:21:21.316 "qid": 0, 00:21:21.316 "state": "enabled", 00:21:21.316 "thread": "nvmf_tgt_poll_group_000", 00:21:21.316 "listen_address": { 00:21:21.316 "trtype": "TCP", 00:21:21.316 "adrfam": "IPv4", 00:21:21.316 "traddr": "10.0.0.2", 00:21:21.316 "trsvcid": "4420" 00:21:21.316 }, 00:21:21.316 "peer_address": { 00:21:21.316 "trtype": "TCP", 00:21:21.316 "adrfam": "IPv4", 00:21:21.316 "traddr": "10.0.0.1", 00:21:21.316 "trsvcid": "43424" 00:21:21.316 }, 00:21:21.316 "auth": { 00:21:21.316 "state": "completed", 00:21:21.316 "digest": "sha512", 00:21:21.316 "dhgroup": "ffdhe8192" 00:21:21.316 } 00:21:21.316 } 00:21:21.316 ]' 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.316 
06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.316 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.575 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.575 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.575 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.575 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.575 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.833 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDU4NTQxZTZiZDcxYjg4YjI4OWUyMTNhYmQ3NGIxM2OP7Tpp: --dhchap-ctrl-secret DHHC-1:02:OWY5ZjYyYjgxYWU1MGMzMzFhMTFkM2EwMGFlMmNiZWIxOGQ3MWNjMzM4ZDlmNDY1BHm2Aw==: 00:21:22.769 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.769 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.769 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.769 06:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.769 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.769 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.769 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.769 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.027 06:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.027 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.962 00:21:23.962 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.962 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.962 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.221 { 
00:21:24.221 "cntlid": 141, 00:21:24.221 "qid": 0, 00:21:24.221 "state": "enabled", 00:21:24.221 "thread": "nvmf_tgt_poll_group_000", 00:21:24.221 "listen_address": { 00:21:24.221 "trtype": "TCP", 00:21:24.221 "adrfam": "IPv4", 00:21:24.221 "traddr": "10.0.0.2", 00:21:24.221 "trsvcid": "4420" 00:21:24.221 }, 00:21:24.221 "peer_address": { 00:21:24.221 "trtype": "TCP", 00:21:24.221 "adrfam": "IPv4", 00:21:24.221 "traddr": "10.0.0.1", 00:21:24.221 "trsvcid": "43460" 00:21:24.221 }, 00:21:24.221 "auth": { 00:21:24.221 "state": "completed", 00:21:24.221 "digest": "sha512", 00:21:24.221 "dhgroup": "ffdhe8192" 00:21:24.221 } 00:21:24.221 } 00:21:24.221 ]' 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.221 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.479 06:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzQ5ZTA3YWMyMTFiMGZjN2VhY2RkMWI0ZTllOWE0YTIwNjdkNTQxZTMyN2RjY2ZhPE/SUA==: --dhchap-ctrl-secret DHHC-1:01:YTU3MjhjYjQ1ZmQ1Y2MzNjFiNzMyOTIyM2Q2YmJiNmJ2uRg5: 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.417 06:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.677 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.936 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.936 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.936 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.872 00:21:26.872 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.872 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.872 06:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.872 06:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.872 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.872 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.872 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.872 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.872 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.872 { 00:21:26.872 "cntlid": 143, 00:21:26.872 "qid": 0, 00:21:26.872 "state": "enabled", 00:21:26.872 "thread": "nvmf_tgt_poll_group_000", 00:21:26.872 "listen_address": { 00:21:26.872 "trtype": "TCP", 00:21:26.872 "adrfam": "IPv4", 00:21:26.872 "traddr": "10.0.0.2", 00:21:26.872 "trsvcid": "4420" 00:21:26.872 }, 00:21:26.872 "peer_address": { 00:21:26.872 "trtype": "TCP", 00:21:26.872 "adrfam": "IPv4", 00:21:26.872 "traddr": "10.0.0.1", 00:21:26.872 "trsvcid": "43486" 00:21:26.872 }, 00:21:26.872 "auth": { 00:21:26.872 "state": "completed", 00:21:26.872 "digest": "sha512", 00:21:26.872 "dhgroup": "ffdhe8192" 00:21:26.872 } 00:21:26.872 } 00:21:26.872 ]' 00:21:26.873 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.131 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.131 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.131 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.131 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.131 06:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.131 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.131 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.389 06:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:21:28.325 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:28.326 06:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.326 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.584 06:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.522 00:21:29.522 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.522 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.522 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.780 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.780 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.780 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.780 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.780 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.780 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.780 { 00:21:29.780 "cntlid": 145, 00:21:29.780 "qid": 0, 00:21:29.780 "state": "enabled", 
00:21:29.780 "thread": "nvmf_tgt_poll_group_000", 00:21:29.780 "listen_address": { 00:21:29.780 "trtype": "TCP", 00:21:29.780 "adrfam": "IPv4", 00:21:29.780 "traddr": "10.0.0.2", 00:21:29.780 "trsvcid": "4420" 00:21:29.780 }, 00:21:29.780 "peer_address": { 00:21:29.780 "trtype": "TCP", 00:21:29.780 "adrfam": "IPv4", 00:21:29.780 "traddr": "10.0.0.1", 00:21:29.780 "trsvcid": "50946" 00:21:29.780 }, 00:21:29.780 "auth": { 00:21:29.780 "state": "completed", 00:21:29.780 "digest": "sha512", 00:21:29.780 "dhgroup": "ffdhe8192" 00:21:29.780 } 00:21:29.780 } 00:21:29.780 ]' 00:21:29.780 06:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.780 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.780 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.780 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.780 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.780 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.780 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.780 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.361 06:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZmY5ZTA0ODY0NDEzMWQyNzFjNWU4NmM3NjJjYzlhZjRjZWE1MDM5NzBkM2M2NmE4NwRzgw==: --dhchap-ctrl-secret DHHC-1:03:MzhhNjExNjFkNTBiYzBkZDdmNzJjZTYwMjg2MTNiMjRlYTcxNGY3MWRjZTU5NWE5MjUwNjAxNWExNjliZTc4NYPihnI=: 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:31.327 
06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.327 06:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:32.268 request: 00:21:32.268 { 00:21:32.268 "name": "nvme0", 00:21:32.268 "trtype": "tcp", 00:21:32.268 "traddr": "10.0.0.2", 00:21:32.268 "adrfam": "ipv4", 00:21:32.268 "trsvcid": "4420", 00:21:32.268 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:32.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.268 "prchk_reftag": false, 00:21:32.268 "prchk_guard": false, 00:21:32.268 "hdgst": false, 00:21:32.268 "ddgst": false, 00:21:32.268 "dhchap_key": "key2", 
00:21:32.268 "method": "bdev_nvme_attach_controller", 00:21:32.268 "req_id": 1 00:21:32.268 } 00:21:32.268 Got JSON-RPC error response 00:21:32.268 response: 00:21:32.268 { 00:21:32.268 "code": -5, 00:21:32.268 "message": "Input/output error" 00:21:32.268 } 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.268 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:32.269 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.269 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:32.269 06:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:32.836 request: 00:21:32.836 { 00:21:32.836 "name": "nvme0", 00:21:32.836 
"trtype": "tcp", 00:21:32.836 "traddr": "10.0.0.2", 00:21:32.836 "adrfam": "ipv4", 00:21:32.836 "trsvcid": "4420", 00:21:32.836 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:32.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.836 "prchk_reftag": false, 00:21:32.836 "prchk_guard": false, 00:21:32.836 "hdgst": false, 00:21:32.836 "ddgst": false, 00:21:32.836 "dhchap_key": "key1", 00:21:32.836 "dhchap_ctrlr_key": "ckey2", 00:21:32.836 "method": "bdev_nvme_attach_controller", 00:21:32.836 "req_id": 1 00:21:32.836 } 00:21:32.836 Got JSON-RPC error response 00:21:32.836 response: 00:21:32.836 { 00:21:32.836 "code": -5, 00:21:32.836 "message": "Input/output error" 00:21:32.836 } 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.836 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.836 06:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.782 request: 00:21:33.782 { 00:21:33.782 "name": "nvme0", 00:21:33.782 "trtype": "tcp", 00:21:33.782 "traddr": "10.0.0.2", 00:21:33.782 "adrfam": "ipv4", 00:21:33.782 "trsvcid": "4420", 00:21:33.782 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:33.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.782 "prchk_reftag": false, 00:21:33.782 "prchk_guard": false, 00:21:33.782 "hdgst": false, 00:21:33.782 "ddgst": false, 00:21:33.782 "dhchap_key": "key1", 00:21:33.782 "dhchap_ctrlr_key": "ckey1", 00:21:33.782 "method": "bdev_nvme_attach_controller", 00:21:33.782 "req_id": 1 00:21:33.782 } 00:21:33.782 Got JSON-RPC error response 00:21:33.782 response: 00:21:33.782 { 00:21:33.782 "code": -5, 00:21:33.782 "message": "Input/output error" 00:21:33.782 } 00:21:33.782 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:33.782 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:33.782 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:33.782 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:33.782 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.782 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:33.782 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 147738 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 147738 ']' 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 147738 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 147738 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 147738' 00:21:33.782 killing process with pid 147738 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 147738 00:21:33.782 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 147738 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=170601 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 170601 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 170601 ']' 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.161 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 170601 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 170601 ']' 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:36.100 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.358 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.358 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:36.358 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:36.358 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.358 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.924 
06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.924 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.861 00:21:37.861 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.861 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.861 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.861 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.861 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.861 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.861 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.861 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.861 06:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.862 { 00:21:37.862 "cntlid": 1, 00:21:37.862 "qid": 0, 00:21:37.862 "state": "enabled", 00:21:37.862 "thread": "nvmf_tgt_poll_group_000", 00:21:37.862 "listen_address": { 00:21:37.862 "trtype": "TCP", 00:21:37.862 "adrfam": "IPv4", 00:21:37.862 "traddr": "10.0.0.2", 00:21:37.862 "trsvcid": "4420" 00:21:37.862 }, 00:21:37.862 "peer_address": { 00:21:37.862 "trtype": "TCP", 00:21:37.862 "adrfam": "IPv4", 00:21:37.862 "traddr": "10.0.0.1", 00:21:37.862 "trsvcid": "50988" 00:21:37.862 }, 00:21:37.862 "auth": { 00:21:37.862 "state": "completed", 00:21:37.862 "digest": "sha512", 00:21:37.862 "dhgroup": "ffdhe8192" 00:21:37.862 } 00:21:37.862 } 00:21:37.862 ]' 00:21:37.862 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.120 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.120 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.120 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.120 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.120 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.120 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.120 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.377 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yjc1YTMwMDQ0MWY3ODA4MjE1NTZkYmM4ZTNmNmY5NjA3NGNmYjhhYzQ4ODc4N2VhMDY4Y2Q3MGQ0MzM3Yjc5N7wWLoc=: 00:21:39.315 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:39.316 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:39.574 06:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.574 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.832 request: 00:21:39.832 { 00:21:39.832 "name": "nvme0", 00:21:39.832 "trtype": "tcp", 00:21:39.832 
"traddr": "10.0.0.2", 00:21:39.832 "adrfam": "ipv4", 00:21:39.832 "trsvcid": "4420", 00:21:39.832 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:39.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.832 "prchk_reftag": false, 00:21:39.832 "prchk_guard": false, 00:21:39.832 "hdgst": false, 00:21:39.832 "ddgst": false, 00:21:39.832 "dhchap_key": "key3", 00:21:39.832 "method": "bdev_nvme_attach_controller", 00:21:39.832 "req_id": 1 00:21:39.832 } 00:21:39.832 Got JSON-RPC error response 00:21:39.832 response: 00:21:39.832 { 00:21:39.832 "code": -5, 00:21:39.832 "message": "Input/output error" 00:21:39.832 } 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:39.832 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.091 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.351 request: 00:21:40.351 { 00:21:40.351 "name": "nvme0", 00:21:40.351 "trtype": "tcp", 00:21:40.351 "traddr": "10.0.0.2", 00:21:40.351 "adrfam": "ipv4", 00:21:40.351 "trsvcid": "4420", 00:21:40.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.351 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.351 "prchk_reftag": false, 00:21:40.351 "prchk_guard": false, 00:21:40.351 "hdgst": false, 00:21:40.351 "ddgst": false, 00:21:40.351 "dhchap_key": "key3", 00:21:40.351 "method": "bdev_nvme_attach_controller", 00:21:40.351 "req_id": 1 00:21:40.351 } 00:21:40.351 Got JSON-RPC error response 00:21:40.351 response: 00:21:40.351 { 00:21:40.351 "code": -5, 00:21:40.351 "message": "Input/output error" 00:21:40.351 } 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.351 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:40.610 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:40.869 request: 00:21:40.869 { 00:21:40.869 "name": "nvme0", 00:21:40.869 "trtype": "tcp", 00:21:40.869 "traddr": "10.0.0.2", 00:21:40.869 "adrfam": "ipv4", 00:21:40.869 "trsvcid": "4420", 00:21:40.869 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.869 "prchk_reftag": false, 00:21:40.869 "prchk_guard": false, 00:21:40.869 "hdgst": false, 00:21:40.869 "ddgst": false, 00:21:40.869 "dhchap_key": "key0", 00:21:40.869 "dhchap_ctrlr_key": "key1", 00:21:40.869 "method": "bdev_nvme_attach_controller", 00:21:40.869 "req_id": 1 00:21:40.869 } 00:21:40.869 Got JSON-RPC error response 00:21:40.869 response: 00:21:40.869 { 00:21:40.869 "code": -5, 00:21:40.869 "message": "Input/output error" 00:21:40.869 } 00:21:40.869 06:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:40.869 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.869 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.869 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.869 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:40.869 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.126 00:21:41.126 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:41.126 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:41.126 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.384 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.384 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.384 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 147886 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 147886 ']' 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 147886 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.642 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 147886 00:21:41.901 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:41.901 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:41.901 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 147886' 00:21:41.901 killing process with pid 147886 00:21:41.901 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 147886 00:21:41.901 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 147886 00:21:44.438 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:44.438 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.438 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:44.438 06:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.438 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.439 rmmod nvme_tcp 00:21:44.439 rmmod nvme_fabrics 00:21:44.439 rmmod nvme_keyring 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 170601 ']' 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 170601 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 170601 ']' 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 170601 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 170601 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 170601' 00:21:44.439 killing process with pid 170601 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 170601 00:21:44.439 06:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 170601 00:21:45.376 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.376 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.377 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.377 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.377 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.377 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.377 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.377 06:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.voQ /tmp/spdk.key-sha256.c9G /tmp/spdk.key-sha384.ar3 /tmp/spdk.key-sha512.JPj /tmp/spdk.key-sha512.Nzx /tmp/spdk.key-sha384.Pnu /tmp/spdk.key-sha256.n5w '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:47.911 00:21:47.911 real 3m16.086s 00:21:47.911 user 7m32.578s 00:21:47.911 sys 0m24.949s 00:21:47.911 06:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.911 ************************************ 00:21:47.911 END TEST nvmf_auth_target 00:21:47.911 ************************************ 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:47.911 ************************************ 00:21:47.911 START TEST nvmf_bdevio_no_huge 00:21:47.911 ************************************ 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:47.911 * Looking for test storage... 
00:21:47.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:47.911 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:47.912 
06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.912 06:13:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.844 06:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.844 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:49.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:49.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:49.845 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:49.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.845 06:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:21:49.845 00:21:49.845 --- 10.0.0.2 ping statistics --- 00:21:49.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.845 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:21:49.845 00:21:49.845 --- 10.0.0.1 ping statistics --- 00:21:49.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.845 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=173765 00:21:49.845 06:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 173765 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 173765 ']' 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.845 06:14:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.845 [2024-07-26 06:14:00.986834] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:49.845 [2024-07-26 06:14:00.986970] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:49.845 [2024-07-26 06:14:01.142858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.105 [2024-07-26 06:14:01.425644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:50.105 [2024-07-26 06:14:01.425734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.105 [2024-07-26 06:14:01.425763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.105 [2024-07-26 06:14:01.425784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.105 [2024-07-26 06:14:01.425808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.105 [2024-07-26 06:14:01.426029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:50.105 [2024-07-26 06:14:01.426105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:50.105 [2024-07-26 06:14:01.426341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.105 [2024-07-26 06:14:01.426350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.672 [2024-07-26 06:14:01.956827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.672 06:14:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.930 Malloc0 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.930 [2024-07-26 06:14:02.047932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.930 { 00:21:50.930 "params": { 00:21:50.930 "name": "Nvme$subsystem", 00:21:50.930 "trtype": "$TEST_TRANSPORT", 00:21:50.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.930 "adrfam": "ipv4", 00:21:50.930 "trsvcid": "$NVMF_PORT", 00:21:50.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.930 "hdgst": ${hdgst:-false}, 00:21:50.930 "ddgst": ${ddgst:-false} 00:21:50.930 }, 00:21:50.930 "method": "bdev_nvme_attach_controller" 00:21:50.930 } 00:21:50.930 EOF 00:21:50.930 )") 00:21:50.930 06:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:50.930 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:50.930 "params": { 00:21:50.930 "name": "Nvme1", 00:21:50.930 "trtype": "tcp", 00:21:50.930 "traddr": "10.0.0.2", 00:21:50.930 "adrfam": "ipv4", 00:21:50.930 "trsvcid": "4420", 00:21:50.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.930 "hdgst": false, 00:21:50.930 "ddgst": false 00:21:50.930 }, 00:21:50.930 "method": "bdev_nvme_attach_controller" 00:21:50.930 }' 00:21:50.930 [2024-07-26 06:14:02.127245] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:50.930 [2024-07-26 06:14:02.127391] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid173925 ] 00:21:51.188 [2024-07-26 06:14:02.270223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:51.454 [2024-07-26 06:14:02.526836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.454 [2024-07-26 06:14:02.526881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.454 [2024-07-26 06:14:02.526885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.026 I/O targets: 00:21:52.026 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:52.026 00:21:52.026 00:21:52.026 CUnit - A unit testing framework for C - Version 2.1-3 00:21:52.026 http://cunit.sourceforge.net/ 00:21:52.026 00:21:52.026 00:21:52.026 Suite: bdevio tests on: Nvme1n1 00:21:52.026 Test: blockdev write read block 
...passed 00:21:52.026 Test: blockdev write zeroes read block ...passed 00:21:52.026 Test: blockdev write zeroes read no split ...passed 00:21:52.026 Test: blockdev write zeroes read split ...passed 00:21:52.285 Test: blockdev write zeroes read split partial ...passed 00:21:52.285 Test: blockdev reset ...[2024-07-26 06:14:03.362305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.285 [2024-07-26 06:14:03.362506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:21:52.285 [2024-07-26 06:14:03.422718] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:52.285 passed 00:21:52.285 Test: blockdev write read 8 blocks ...passed 00:21:52.285 Test: blockdev write read size > 128k ...passed 00:21:52.285 Test: blockdev write read invalid size ...passed 00:21:52.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.285 Test: blockdev write read max offset ...passed 00:21:52.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:52.543 Test: blockdev writev readv 8 blocks ...passed 00:21:52.543 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.543 Test: blockdev writev readv block ...passed 00:21:52.543 Test: blockdev writev readv size > 128k ...passed 00:21:52.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.543 Test: blockdev comparev and writev ...[2024-07-26 06:14:03.686037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.543 [2024-07-26 06:14:03.686123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:52.543 [2024-07-26 06:14:03.686162] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.543 [2024-07-26 06:14:03.686198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:52.543 [2024-07-26 06:14:03.686703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.543 [2024-07-26 06:14:03.686739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:52.543 [2024-07-26 06:14:03.686774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.543 [2024-07-26 06:14:03.686800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:52.543 [2024-07-26 06:14:03.687273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.543 [2024-07-26 06:14:03.687314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:52.543 [2024-07-26 06:14:03.687348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.543 [2024-07-26 06:14:03.687374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:52.543 [2024-07-26 06:14:03.687852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.543 [2024-07-26 06:14:03.687885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:21:52.543 [2024-07-26 06:14:03.687919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.544 [2024-07-26 06:14:03.687944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:52.544 passed 00:21:52.544 Test: blockdev nvme passthru rw ...passed 00:21:52.544 Test: blockdev nvme passthru vendor specific ...[2024-07-26 06:14:03.770556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.544 [2024-07-26 06:14:03.770618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:52.544 [2024-07-26 06:14:03.771000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.544 [2024-07-26 06:14:03.771039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:52.544 [2024-07-26 06:14:03.771297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.544 [2024-07-26 06:14:03.771329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:52.544 [2024-07-26 06:14:03.771548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.544 [2024-07-26 06:14:03.771582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:52.544 passed 00:21:52.544 Test: blockdev nvme admin passthru ...passed 00:21:52.544 Test: blockdev copy ...passed 00:21:52.544 00:21:52.544 Run Summary: Type Total Ran Passed Failed Inactive 
00:21:52.544 suites 1 1 n/a 0 0 00:21:52.544 tests 23 23 23 0 0 00:21:52.544 asserts 152 152 152 0 n/a 00:21:52.544 00:21:52.544 Elapsed time = 1.420 seconds 00:21:53.109 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.368 rmmod nvme_tcp 00:21:53.368 rmmod nvme_fabrics 00:21:53.368 rmmod nvme_keyring 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 173765 ']' 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 173765 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 173765 ']' 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 173765 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173765 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173765' 00:21:53.368 killing process with pid 173765 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 173765 00:21:53.368 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 173765 00:21:54.306 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:21:54.306 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.306 06:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:54.306 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:54.306 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.306 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.306 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.306 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.306 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.840 00:21:56.840 real 0m8.777s 00:21:56.840 user 0m20.749s 00:21:56.840 sys 0m2.796s 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 ************************************ 00:21:56.840 END TEST nvmf_bdevio_no_huge 00:21:56.840 ************************************ 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 ************************************ 
00:21:56.840 START TEST nvmf_tls 00:21:56.840 ************************************ 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:56.840 * Looking for test storage... 00:21:56.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:56.840 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.841 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.749 06:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.749 06:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.749 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.749 06:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:58.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:21:58.749 00:21:58.749 --- 10.0.0.2 ping statistics --- 00:21:58.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.749 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:21:58.749 00:21:58.749 --- 10.0.0.1 ping statistics --- 00:21:58.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.749 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=176246 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 176246 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 176246 ']' 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.749 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.749 [2024-07-26 06:14:09.816198] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:58.749 [2024-07-26 06:14:09.816326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.749 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.749 [2024-07-26 06:14:09.963027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.010 [2024-07-26 06:14:10.219511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.010 [2024-07-26 06:14:10.219589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.010 [2024-07-26 06:14:10.219616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.010 [2024-07-26 06:14:10.219642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.010 [2024-07-26 06:14:10.219663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:59.010 [2024-07-26 06:14:10.219716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:59.578 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:59.836 true 00:21:59.836 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.836 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:00.095 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:00.095 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:00.095 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:00.352 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.352 06:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:00.609 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:00.609 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:00.609 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:00.867 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.867 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:01.126 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:01.126 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:01.126 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.126 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:01.384 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:01.384 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:01.384 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:01.643 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.643 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:01.901 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:01.901 
06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:01.902 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:02.159 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.159 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.416 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.673 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.RvJxr0t4tR 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.PsXrQxYBXI 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RvJxr0t4tR 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PsXrQxYBXI 00:22:02.674 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:02.981 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:03.549 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RvJxr0t4tR 00:22:03.550 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RvJxr0t4tR 00:22:03.550 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.807 [2024-07-26 06:14:14.977043] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.807 06:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.064 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.322 [2024-07-26 06:14:15.506591] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.322 [2024-07-26 06:14:15.506949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.322 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:04.579 malloc0 00:22:04.579 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.836 06:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RvJxr0t4tR 00:22:05.096 
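The trace above derives two TLS PSKs in NVMe interchange format (`format_interchange_psk` pipes a prefix, hex key, and digest id through `python -`), writes them to `mktemp` files, `chmod 0600`s them, and hands one to `nvmf_subsystem_add_host --psk`. A sketch of what the helper's traced output implies, assuming the base64 payload is the configured key bytes followed by a little-endian CRC32 trailer (the four extra bytes visible in the traced keys; the exact checksum transform is inferred, not confirmed by the trace):

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, prefix: str = "NVMeTLSkey-1",
                           digest: int = 1) -> str:
    """Sketch of the format_key helper traced above: base64-encode the
    configured key bytes plus an (assumed) little-endian CRC32 trailer,
    wrapped as '<prefix>:0<digest>:<base64>:'."""
    payload = key_hex.encode("ascii")
    payload += struct.pack("<I", zlib.crc32(payload))
    return f"{prefix}:0{digest}:{base64.b64encode(payload).decode()}:"

key = format_interchange_psk("00112233445566778899aabbccddeeff")

# The base64 payload begins with the ASCII key material itself, exactly
# as in the traced key NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1...
assert key.startswith("NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1")
assert key.endswith(":")
```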
[2024-07-26 06:14:16.282157] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.096 06:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RvJxr0t4tR 00:22:05.096 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.314 Initializing NVMe Controllers 00:22:17.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:17.314 Initialization complete. Launching workers. 00:22:17.314 ======================================================== 00:22:17.314 Latency(us) 00:22:17.314 Device Information : IOPS MiB/s Average min max 00:22:17.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5663.99 22.12 11303.76 2353.60 13687.59 00:22:17.314 ======================================================== 00:22:17.314 Total : 5663.99 22.12 11303.76 2353.60 13687.59 00:22:17.314 00:22:17.314 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RvJxr0t4tR 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RvJxr0t4tR' 00:22:17.315 06:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=178273 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 178273 /var/tmp/bdevperf.sock 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 178273 ']' 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.315 06:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.315 [2024-07-26 06:14:26.603039] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
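bdevperf is launched with `-z -r /var/tmp/bdevperf.sock`, and `waitforlisten` (from autotest_common.sh, per the traced line numbers) blocks until that RPC socket accepts connections. A hedged sketch of the core of that wait loop; the real helper does more bookkeeping (pid checks, retry limits), and this models only "poll until the UNIX socket connects":

```python
import os
import socket
import time

def waitforlisten(sock_path: str, timeout: float = 10.0,
                  interval: float = 0.1) -> bool:
    """Poll until a process accepts connections on a UNIX domain socket,
    or give up after `timeout` seconds. Sketch of what tls.sh waits for
    before issuing bdevperf RPCs; not the actual autotest helper."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True  # something is listening
            except OSError:
                pass  # socket file exists but nothing accepts yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```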
00:22:17.315 [2024-07-26 06:14:26.603230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178273 ] 00:22:17.315 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.315 [2024-07-26 06:14:26.726141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.315 [2024-07-26 06:14:26.957516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.315 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.315 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:17.315 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RvJxr0t4tR 00:22:17.315 [2024-07-26 06:14:27.745549] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.315 [2024-07-26 06:14:27.745737] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:17.315 TLSTESTn1 00:22:17.315 06:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:17.315 Running I/O for 10 seconds... 
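Once bdevperf is listening, tls.sh attaches a TLS-enabled controller through `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... --psk /tmp/tmp.RvJxr0t4tR`. A sketch of the JSON-RPC body that invocation produces; the parameter names are copied from the `request:` dumps bdevperf logs elsewhere in this trace, and the `jsonrpc`/`id` envelope is standard JSON-RPC 2.0:

```python
import json

# Values copied from the traced rpc.py invocation; key names mirror the
# request dumps logged by bdevperf in this same trace.
params = {
    "name": "TLSTEST",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "prchk_reftag": False,
    "prchk_guard": False,
    "hdgst": False,
    "ddgst": False,
    "psk": "/tmp/tmp.RvJxr0t4tR",
}
request = {"jsonrpc": "2.0", "id": 1,
           "method": "bdev_nvme_attach_controller", "params": params}

# This is the body rpc.py would write to /var/tmp/bdevperf.sock.
wire = json.dumps(request)
```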
00:22:27.295 00:22:27.295 Latency(us) 00:22:27.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.295 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:27.295 Verification LBA range: start 0x0 length 0x2000 00:22:27.295 TLSTESTn1 : 10.04 1556.25 6.08 0.00 0.00 82093.76 18155.90 74565.40 00:22:27.295 =================================================================================================================== 00:22:27.295 Total : 1556.25 6.08 0.00 0.00 82093.76 18155.90 74565.40 00:22:27.295 0 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 178273 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 178273 ']' 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 178273 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 178273 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 178273' 00:22:27.295 killing process with pid 178273 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 178273 00:22:27.295 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.295 
00:22:27.295 Latency(us) 00:22:27.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.295 =================================================================================================================== 00:22:27.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.295 [2024-07-26 06:14:38.054428] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.295 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 178273 00:22:27.862 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PsXrQxYBXI 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PsXrQxYBXI 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PsXrQxYBXI 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
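The next run is wrapped in the `NOT` helper: attaching with the wrong key (`/tmp/tmp.PsXrQxYBXI`, which the target was never configured with) is expected to fail, and the trace records the resulting JSON-RPC error response verbatim. A small sketch of checking that response shape; the error body is copied from the trace, and the parsing itself is only illustrative:

```python
import json

# Error response recorded in the trace for the failed attach attempt.
response_text = '{"code": -5, "message": "Input/output error"}'
error = json.loads(response_text)

# `NOT run_bdevperf ...` passes only when the attach fails; the RPC layer
# surfaces the failure as code -5 (Input/output error).
attach_failed = (error["code"] == -5)
assert attach_failed
```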
00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PsXrQxYBXI' 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=179722 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 179722 /var/tmp/bdevperf.sock 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 179722 ']' 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.862 06:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.862 [2024-07-26 06:14:39.070153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:27.862 [2024-07-26 06:14:39.070303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179722 ] 00:22:27.862 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.862 [2024-07-26 06:14:39.193706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.120 [2024-07-26 06:14:39.421200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.703 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.703 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:28.703 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PsXrQxYBXI 00:22:28.962 [2024-07-26 06:14:40.260997] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.962 [2024-07-26 06:14:40.261239] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:28.962 [2024-07-26 06:14:40.271712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is 
not connected 00:22:28.962 [2024-07-26 06:14:40.272328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:28.962 [2024-07-26 06:14:40.273303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:28.962 [2024-07-26 06:14:40.274295] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.962 [2024-07-26 06:14:40.274334] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:28.962 [2024-07-26 06:14:40.274362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:28.962 request: 00:22:28.962 { 00:22:28.962 "name": "TLSTEST", 00:22:28.962 "trtype": "tcp", 00:22:28.962 "traddr": "10.0.0.2", 00:22:28.962 "adrfam": "ipv4", 00:22:28.962 "trsvcid": "4420", 00:22:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.962 "prchk_reftag": false, 00:22:28.962 "prchk_guard": false, 00:22:28.962 "hdgst": false, 00:22:28.962 "ddgst": false, 00:22:28.962 "psk": "/tmp/tmp.PsXrQxYBXI", 00:22:28.962 "method": "bdev_nvme_attach_controller", 00:22:28.962 "req_id": 1 00:22:28.962 } 00:22:28.962 Got JSON-RPC error response 00:22:28.962 response: 00:22:28.962 { 00:22:28.962 "code": -5, 00:22:28.962 "message": "Input/output error" 00:22:28.962 } 00:22:28.962 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 179722 00:22:28.962 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 179722 ']' 00:22:28.962 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 179722 00:22:28.962 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:29.221 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.221 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 179722 00:22:29.221 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:29.221 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:29.221 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 179722' 00:22:29.221 killing process with pid 179722 00:22:29.221 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 179722 00:22:29.221 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.221 00:22:29.221 Latency(us) 00:22:29.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.221 =================================================================================================================== 00:22:29.221 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.221 [2024-07-26 06:14:40.324389] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:29.221 06:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 179722 00:22:30.159 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.159 06:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RvJxr0t4tR 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RvJxr0t4tR 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RvJxr0t4tR 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RvJxr0t4tR' 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=179993 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 179993 /var/tmp/bdevperf.sock 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 179993 ']' 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.159 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.159 [2024-07-26 06:14:41.367616] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:30.159 [2024-07-26 06:14:41.367773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179993 ] 00:22:30.159 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.159 [2024-07-26 06:14:41.489146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.419 [2024-07-26 06:14:41.716609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.988 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.988 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:30.988 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RvJxr0t4tR 00:22:31.246 [2024-07-26 06:14:42.539571] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.246 [2024-07-26 06:14:42.539762] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:31.246 [2024-07-26 06:14:42.552379] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:31.246 [2024-07-26 06:14:42.552434] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:31.246 [2024-07-26 06:14:42.552513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() 
failed, errno 107: Transport endpoint is not connected 00:22:31.246 [2024-07-26 06:14:42.552743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:31.246 [2024-07-26 06:14:42.553713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:31.246 [2024-07-26 06:14:42.554707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.246 [2024-07-26 06:14:42.554751] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:31.246 [2024-07-26 06:14:42.554777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:31.246 request: 00:22:31.246 { 00:22:31.246 "name": "TLSTEST", 00:22:31.246 "trtype": "tcp", 00:22:31.246 "traddr": "10.0.0.2", 00:22:31.246 "adrfam": "ipv4", 00:22:31.246 "trsvcid": "4420", 00:22:31.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.246 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.246 "prchk_reftag": false, 00:22:31.246 "prchk_guard": false, 00:22:31.246 "hdgst": false, 00:22:31.246 "ddgst": false, 00:22:31.246 "psk": "/tmp/tmp.RvJxr0t4tR", 00:22:31.246 "method": "bdev_nvme_attach_controller", 00:22:31.246 "req_id": 1 00:22:31.246 } 00:22:31.246 Got JSON-RPC error response 00:22:31.246 response: 00:22:31.246 { 00:22:31.246 "code": -5, 00:22:31.246 "message": "Input/output error" 00:22:31.246 } 00:22:31.246 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 179993 00:22:31.246 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 179993 ']' 00:22:31.246 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 179993 00:22:31.246 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:31.246 06:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.246 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 179993 00:22:31.504 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:31.504 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:31.504 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 179993' 00:22:31.504 killing process with pid 179993 00:22:31.504 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 179993 00:22:31.504 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.504 00:22:31.504 Latency(us) 00:22:31.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.504 =================================================================================================================== 00:22:31.504 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:31.504 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 179993 00:22:31.504 [2024-07-26 06:14:42.595462] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.453 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # 
[[ -n '' ]] 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RvJxr0t4tR 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RvJxr0t4tR 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RvJxr0t4tR 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RvJxr0t4tR' 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=180263 00:22:32.453 06:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 180263 /var/tmp/bdevperf.sock 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 180263 ']' 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.453 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.453 [2024-07-26 06:14:43.588748] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:32.453 [2024-07-26 06:14:43.588902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180263 ] 00:22:32.453 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.453 [2024-07-26 06:14:43.707869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.723 [2024-07-26 06:14:43.934847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.290 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.290 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:33.290 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RvJxr0t4tR 00:22:33.550 [2024-07-26 06:14:44.755912] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.550 [2024-07-26 06:14:44.756150] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:33.550 [2024-07-26 06:14:44.768324] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:33.550 [2024-07-26 06:14:44.768367] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:33.550 [2024-07-26 06:14:44.768454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() 
failed, errno 107: Transport endpoint is not connected 00:22:33.550 [2024-07-26 06:14:44.769402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:33.550 [2024-07-26 06:14:44.770389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:33.550 [2024-07-26 06:14:44.771382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:33.550 [2024-07-26 06:14:44.771419] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:33.550 [2024-07-26 06:14:44.771447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:33.550 request: 00:22:33.550 { 00:22:33.550 "name": "TLSTEST", 00:22:33.550 "trtype": "tcp", 00:22:33.550 "traddr": "10.0.0.2", 00:22:33.550 "adrfam": "ipv4", 00:22:33.550 "trsvcid": "4420", 00:22:33.550 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.550 "prchk_reftag": false, 00:22:33.550 "prchk_guard": false, 00:22:33.550 "hdgst": false, 00:22:33.550 "ddgst": false, 00:22:33.550 "psk": "/tmp/tmp.RvJxr0t4tR", 00:22:33.550 "method": "bdev_nvme_attach_controller", 00:22:33.550 "req_id": 1 00:22:33.550 } 00:22:33.550 Got JSON-RPC error response 00:22:33.550 response: 00:22:33.550 { 00:22:33.550 "code": -5, 00:22:33.550 "message": "Input/output error" 00:22:33.550 } 00:22:33.550 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 180263 00:22:33.550 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 180263 ']' 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 180263 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:33.551 06:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 180263 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 180263' 00:22:33.551 killing process with pid 180263 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 180263 00:22:33.551 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.551 00:22:33.551 Latency(us) 00:22:33.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.551 =================================================================================================================== 00:22:33.551 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.551 [2024-07-26 06:14:44.820600] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:33.551 06:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 180263 00:22:34.488 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # 
[[ -n '' ]] 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=180541 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 180541 /var/tmp/bdevperf.sock 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 180541 ']' 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.488 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.746 [2024-07-26 06:14:45.831428] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:34.746 [2024-07-26 06:14:45.831564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180541 ] 00:22:34.746 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.746 [2024-07-26 06:14:45.956641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.003 [2024-07-26 06:14:46.183008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.570 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.570 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:35.570 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:35.829 [2024-07-26 06:14:47.034274] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:35.829 [2024-07-26 06:14:47.035575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:22:35.829 [2024-07-26 06:14:47.036564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:35.829 [2024-07-26 06:14:47.036608] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:35.829 [2024-07-26 06:14:47.036631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:35.829 request: 00:22:35.829 { 00:22:35.829 "name": "TLSTEST", 00:22:35.829 "trtype": "tcp", 00:22:35.829 "traddr": "10.0.0.2", 00:22:35.829 "adrfam": "ipv4", 00:22:35.829 "trsvcid": "4420", 00:22:35.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.829 "prchk_reftag": false, 00:22:35.829 "prchk_guard": false, 00:22:35.829 "hdgst": false, 00:22:35.829 "ddgst": false, 00:22:35.829 "method": "bdev_nvme_attach_controller", 00:22:35.829 "req_id": 1 00:22:35.829 } 00:22:35.829 Got JSON-RPC error response 00:22:35.829 response: 00:22:35.829 { 00:22:35.829 "code": -5, 00:22:35.829 "message": "Input/output error" 00:22:35.829 } 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 180541 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 180541 ']' 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 180541 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 180541 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:35.829 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 180541' 00:22:35.830 killing process with pid 180541 00:22:35.830 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 180541 00:22:35.830 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.830 00:22:35.830 
Latency(us) 00:22:35.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.830 =================================================================================================================== 00:22:35.830 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:35.830 06:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 180541 00:22:36.766 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 176246 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 176246 ']' 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 176246 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 176246 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 176246' 00:22:36.766 killing process with pid 176246 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 176246 00:22:36.766 [2024-07-26 06:14:48.031838] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:36.766 06:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 176246 00:22:38.146 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Cnb6qXGBWU 00:22:38.405 06:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Cnb6qXGBWU 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=180959 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 180959 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 180959 ']' 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.405 06:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.405 [2024-07-26 06:14:49.651581] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:38.405 [2024-07-26 06:14:49.651726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.405 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.663 [2024-07-26 06:14:49.783769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.920 [2024-07-26 06:14:50.038019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.920 [2024-07-26 06:14:50.038099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.920 [2024-07-26 06:14:50.038143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.920 [2024-07-26 06:14:50.038166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.920 [2024-07-26 06:14:50.038187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.920 [2024-07-26 06:14:50.038229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.483 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.483 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:39.484 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.484 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.484 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.484 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.484 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Cnb6qXGBWU 00:22:39.484 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Cnb6qXGBWU 00:22:39.484 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:39.741 [2024-07-26 06:14:50.852764] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.741 06:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:39.998 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:40.256 [2024-07-26 06:14:51.426471] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.256 [2024-07-26 06:14:51.426789] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:40.256 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:40.514 malloc0 00:22:40.514 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:40.772 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cnb6qXGBWU 00:22:41.031 [2024-07-26 06:14:52.256514] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cnb6qXGBWU 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Cnb6qXGBWU' 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=181269 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.031 06:14:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 181269 /var/tmp/bdevperf.sock 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 181269 ']' 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.031 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.031 [2024-07-26 06:14:52.358504] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:41.031 [2024-07-26 06:14:52.358640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181269 ] 00:22:41.289 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.289 [2024-07-26 06:14:52.481311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.549 [2024-07-26 06:14:52.706929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.115 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.115 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:42.115 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cnb6qXGBWU 00:22:42.373 [2024-07-26 06:14:53.537111] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.373 [2024-07-26 06:14:53.537324] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:42.373 TLSTESTn1 00:22:42.373 06:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:42.631 Running I/O for 10 seconds... 
00:22:52.622 00:22:52.622 Latency(us) 00:22:52.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.622 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.622 Verification LBA range: start 0x0 length 0x2000 00:22:52.622 TLSTESTn1 : 10.03 2713.44 10.60 0.00 0.00 47065.01 10728.49 57089.14 00:22:52.622 =================================================================================================================== 00:22:52.622 Total : 2713.44 10.60 0.00 0.00 47065.01 10728.49 57089.14 00:22:52.622 0 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 181269 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 181269 ']' 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 181269 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 181269 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 181269' 00:22:52.622 killing process with pid 181269 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 181269 00:22:52.622 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.622 
00:22:52.622 Latency(us) 00:22:52.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.622 =================================================================================================================== 00:22:52.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.622 [2024-07-26 06:15:03.852950] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.622 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 181269 00:22:53.588 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Cnb6qXGBWU 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cnb6qXGBWU 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cnb6qXGBWU 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cnb6qXGBWU 
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Cnb6qXGBWU'
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=182874
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 182874 /var/tmp/bdevperf.sock
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 182874 ']'
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:53.588 06:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:53.848 [2024-07-26 06:15:04.926226] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:22:53.848 [2024-07-26 06:15:04.926396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182874 ]
00:22:53.848 EAL: No free 2048 kB hugepages reported on node 1
00:22:53.848 [2024-07-26 06:15:05.068219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:54.107 [2024-07-26 06:15:05.313087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:54.673 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:54.673 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:22:54.673 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cnb6qXGBWU
00:22:54.933 [2024-07-26 06:15:06.157512] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:54.933 [2024-07-26 06:15:06.157606] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file
00:22:54.933 [2024-07-26 06:15:06.157628] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Cnb6qXGBWU
00:22:54.933 request:
00:22:54.933 {
00:22:54.933 "name": "TLSTEST",
00:22:54.933 "trtype": "tcp",
00:22:54.933 "traddr": "10.0.0.2",
00:22:54.933 "adrfam": "ipv4",
00:22:54.933 "trsvcid": "4420",
00:22:54.933 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:54.933 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:54.933 "prchk_reftag": false,
00:22:54.933 "prchk_guard": false,
00:22:54.933 "hdgst": false,
00:22:54.933 "ddgst": false,
00:22:54.933 "psk": "/tmp/tmp.Cnb6qXGBWU",
00:22:54.933 "method": "bdev_nvme_attach_controller",
00:22:54.933 "req_id": 1
00:22:54.933 }
00:22:54.933 Got JSON-RPC error response
00:22:54.933 response:
00:22:54.933 {
00:22:54.933 "code": -1,
00:22:54.933 "message": "Operation not permitted"
00:22:54.933 }
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 182874
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 182874 ']'
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 182874
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 182874
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 182874'
killing process with pid 182874
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 182874
00:22:54.933 Received shutdown signal, test time was about 10.000000 seconds
00:22:54.933
00:22:54.933 Latency(us)
00:22:54.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.933 ===================================================================================================================
00:22:54.933 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:54.933 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 182874
00:22:55.867 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 180959
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 180959 ']'
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 180959
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 180959
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:55.867 06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 180959'
killing process with pid 180959
06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 180959
[2024-07-26 06:15:07.153026] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
06:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 180959
00:22:57.243 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=183710
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 183710
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 183710 ']'
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:57.501 06:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:57.501 [2024-07-26 06:15:08.711931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:22:57.501 [2024-07-26 06:15:08.712093] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:57.760 EAL: No free 2048 kB hugepages reported on node 1
00:22:57.760 [2024-07-26 06:15:08.849218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:58.018 [2024-07-26 06:15:09.105281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:58.018 [2024-07-26 06:15:09.105376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:58.018 [2024-07-26 06:15:09.105405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:58.018 [2024-07-26 06:15:09.105431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:58.018 [2024-07-26 06:15:09.105454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:58.018 [2024-07-26 06:15:09.105521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Cnb6qXGBWU
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Cnb6qXGBWU
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Cnb6qXGBWU
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Cnb6qXGBWU
00:22:58.586 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:58.586 [2024-07-26 06:15:09.918948] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:58.844 06:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:59.102 06:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:22:59.102 [2024-07-26 06:15:10.420344] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:59.102 [2024-07-26 06:15:10.420693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:59.359 06:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:59.616 malloc0
00:22:59.617 06:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:59.876 06:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cnb6qXGBWU
00:22:59.876 [2024-07-26 06:15:11.194173] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file
00:22:59.876 [2024-07-26 06:15:11.194239] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file
00:22:59.876 [2024-07-26 06:15:11.194279] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:22:59.876 request:
00:22:59.876 {
00:22:59.876 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:59.876 "host": "nqn.2016-06.io.spdk:host1",
00:22:59.876 "psk": "/tmp/tmp.Cnb6qXGBWU",
00:22:59.876 "method": "nvmf_subsystem_add_host",
00:22:59.876 "req_id": 1
00:22:59.876 }
00:22:59.876 Got JSON-RPC error response
00:22:59.876 response:
00:22:59.876 {
00:22:59.876 "code": -32603,
00:22:59.876 "message": "Internal error"
00:22:59.876 }
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 183710
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 183710 ']'
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 183710
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 183710
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:00.135 06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 183710'
killing process with pid 183710
06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 183710
06:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 183710
00:23:01.515 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Cnb6qXGBWU
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=184293
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 184293
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 184293 ']'
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:01.515 06:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:01.515 [2024-07-26 06:15:12.769282] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:01.515 [2024-07-26 06:15:12.769442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:01.775 EAL: No free 2048 kB hugepages reported on node 1
00:23:01.775 [2024-07-26 06:15:12.910699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:02.035 [2024-07-26 06:15:13.166585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:02.035 [2024-07-26 06:15:13.166668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:02.035 [2024-07-26 06:15:13.166698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:02.035 [2024-07-26 06:15:13.166724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:02.035 [2024-07-26 06:15:13.166747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:02.035 [2024-07-26 06:15:13.166798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Cnb6qXGBWU
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Cnb6qXGBWU
00:23:02.600 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:02.858 [2024-07-26 06:15:13.940893] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:02.858 06:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:03.116 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:03.116 [2024-07-26 06:15:14.418144] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:03.116 [2024-07-26 06:15:14.418487] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:03.116 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:03.374 malloc0
00:23:03.632 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:03.890 06:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cnb6qXGBWU
00:23:03.890 [2024-07-26 06:15:15.191134] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:23:03.890 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=184701
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 184701 /var/tmp/bdevperf.sock
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 184701 ']'
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:03.891 06:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:04.150 [2024-07-26 06:15:15.286749] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:04.150 [2024-07-26 06:15:15.286905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184701 ]
00:23:04.150 EAL: No free 2048 kB hugepages reported on node 1
00:23:04.150 [2024-07-26 06:15:15.407591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:04.409 [2024-07-26 06:15:15.646618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:04.975 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:04.975 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:23:04.975 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cnb6qXGBWU
00:23:05.233 [2024-07-26 06:15:16.429652] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:05.233 [2024-07-26 06:15:16.429852] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:05.233 TLSTESTn1
00:23:05.233 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config
00:23:05.803 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{
00:23:05.803 "subsystems": [
00:23:05.803 {
00:23:05.803 "subsystem": "keyring",
00:23:05.803 "config": []
00:23:05.803 },
00:23:05.803 {
00:23:05.803 "subsystem": "iobuf",
00:23:05.803 "config": [
00:23:05.803 {
00:23:05.803 "method": "iobuf_set_options",
00:23:05.803 "params": {
00:23:05.803 "small_pool_count": 8192,
00:23:05.803 "large_pool_count": 1024,
00:23:05.803 "small_bufsize": 8192,
00:23:05.803 "large_bufsize": 135168
00:23:05.803 }
00:23:05.803 }
00:23:05.803 ]
00:23:05.803 },
00:23:05.803 {
00:23:05.803 "subsystem": "sock",
00:23:05.803 "config": [
00:23:05.803 {
00:23:05.803 "method": "sock_set_default_impl",
00:23:05.803 "params": {
00:23:05.803 "impl_name": "posix"
00:23:05.803 }
00:23:05.803 },
00:23:05.803 {
00:23:05.803 "method": "sock_impl_set_options",
00:23:05.803 "params": {
00:23:05.803 "impl_name": "ssl",
00:23:05.803 "recv_buf_size": 4096,
00:23:05.803 "send_buf_size": 4096,
00:23:05.803 "enable_recv_pipe": true,
00:23:05.803 "enable_quickack": false,
00:23:05.803 "enable_placement_id": 0,
00:23:05.803 "enable_zerocopy_send_server": true,
00:23:05.803 "enable_zerocopy_send_client": false,
00:23:05.803 "zerocopy_threshold": 0,
00:23:05.803 "tls_version": 0,
00:23:05.803 "enable_ktls": false
00:23:05.803 }
00:23:05.803 },
00:23:05.803 {
00:23:05.803 "method": "sock_impl_set_options",
00:23:05.803 "params": {
00:23:05.803 "impl_name": "posix",
00:23:05.803 "recv_buf_size": 2097152,
00:23:05.803 "send_buf_size": 2097152,
00:23:05.803 "enable_recv_pipe": true,
00:23:05.803 "enable_quickack": false,
00:23:05.803 "enable_placement_id": 0,
00:23:05.803 "enable_zerocopy_send_server": true,
00:23:05.803 "enable_zerocopy_send_client": false,
00:23:05.803 "zerocopy_threshold": 0,
00:23:05.803 "tls_version": 0,
00:23:05.803 "enable_ktls": false
00:23:05.803 }
00:23:05.803 }
00:23:05.803 ]
00:23:05.803 },
00:23:05.803 {
00:23:05.803 "subsystem": "vmd",
00:23:05.803 "config": []
00:23:05.803 },
00:23:05.803 {
00:23:05.803 "subsystem": "accel",
00:23:05.803 "config": [
00:23:05.803 {
00:23:05.803 "method": "accel_set_options",
00:23:05.803 "params": {
00:23:05.803 "small_cache_size": 128,
00:23:05.803 "large_cache_size": 16,
00:23:05.803 "task_count": 2048,
00:23:05.803 "sequence_count": 2048,
00:23:05.803 "buf_count": 2048
00:23:05.803 }
00:23:05.803 }
00:23:05.803 ]
00:23:05.803 },
00:23:05.803 {
00:23:05.803 "subsystem": "bdev",
00:23:05.803 "config": [
00:23:05.803 {
00:23:05.803 "method": "bdev_set_options",
00:23:05.803 "params": {
00:23:05.803 "bdev_io_pool_size": 65535,
00:23:05.803 "bdev_io_cache_size": 256,
00:23:05.804 "bdev_auto_examine": true,
00:23:05.804 "iobuf_small_cache_size": 128,
00:23:05.804 "iobuf_large_cache_size": 16
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "bdev_raid_set_options",
00:23:05.804 "params": {
00:23:05.804 "process_window_size_kb": 1024,
00:23:05.804 "process_max_bandwidth_mb_sec": 0
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "bdev_iscsi_set_options",
00:23:05.804 "params": {
00:23:05.804 "timeout_sec": 30
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "bdev_nvme_set_options",
00:23:05.804 "params": {
00:23:05.804 "action_on_timeout": "none",
00:23:05.804 "timeout_us": 0,
00:23:05.804 "timeout_admin_us": 0,
00:23:05.804 "keep_alive_timeout_ms": 10000,
00:23:05.804 "arbitration_burst": 0,
00:23:05.804 "low_priority_weight": 0,
00:23:05.804 "medium_priority_weight": 0,
00:23:05.804 "high_priority_weight": 0,
00:23:05.804 "nvme_adminq_poll_period_us": 10000,
00:23:05.804 "nvme_ioq_poll_period_us": 0,
00:23:05.804 "io_queue_requests": 0,
00:23:05.804 "delay_cmd_submit": true,
00:23:05.804 "transport_retry_count": 4,
00:23:05.804 "bdev_retry_count": 3,
00:23:05.804 "transport_ack_timeout": 0,
00:23:05.804 "ctrlr_loss_timeout_sec": 0,
00:23:05.804 "reconnect_delay_sec": 0,
00:23:05.804 "fast_io_fail_timeout_sec": 0,
00:23:05.804 "disable_auto_failback": false,
00:23:05.804 "generate_uuids": false,
00:23:05.804 "transport_tos": 0,
00:23:05.804 "nvme_error_stat": false,
00:23:05.804 "rdma_srq_size": 0,
00:23:05.804 "io_path_stat": false,
00:23:05.804 "allow_accel_sequence": false,
00:23:05.804 "rdma_max_cq_size": 0,
00:23:05.804 "rdma_cm_event_timeout_ms": 0,
00:23:05.804 "dhchap_digests": [
00:23:05.804 "sha256",
00:23:05.804 "sha384",
00:23:05.804 "sha512"
00:23:05.804 ],
00:23:05.804 "dhchap_dhgroups": [
00:23:05.804 "null",
00:23:05.804 "ffdhe2048",
00:23:05.804 "ffdhe3072",
00:23:05.804 "ffdhe4096",
00:23:05.804 "ffdhe6144",
00:23:05.804 "ffdhe8192"
00:23:05.804 ]
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "bdev_nvme_set_hotplug",
00:23:05.804 "params": {
00:23:05.804 "period_us": 100000,
00:23:05.804 "enable": false
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "bdev_malloc_create",
00:23:05.804 "params": {
00:23:05.804 "name": "malloc0",
00:23:05.804 "num_blocks": 8192,
00:23:05.804 "block_size": 4096,
00:23:05.804 "physical_block_size": 4096,
00:23:05.804 "uuid": "7a5a3464-cac6-494b-9cf8-42f8415cdfb9",
00:23:05.804 "optimal_io_boundary": 0,
00:23:05.804 "md_size": 0,
00:23:05.804 "dif_type": 0,
00:23:05.804 "dif_is_head_of_md": false,
00:23:05.804 "dif_pi_format": 0
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "bdev_wait_for_examine"
00:23:05.804 }
00:23:05.804 ]
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "subsystem": "nbd",
00:23:05.804 "config": []
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "subsystem": "scheduler",
00:23:05.804 "config": [
00:23:05.804 {
00:23:05.804 "method": "framework_set_scheduler",
00:23:05.804 "params": {
00:23:05.804 "name": "static"
00:23:05.804 }
00:23:05.804 }
00:23:05.804 ]
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "subsystem": "nvmf",
00:23:05.804 "config": [
00:23:05.804 {
00:23:05.804 "method": "nvmf_set_config",
00:23:05.804 "params": {
00:23:05.804 "discovery_filter": "match_any",
00:23:05.804 "admin_cmd_passthru": {
00:23:05.804 "identify_ctrlr": false
00:23:05.804 }
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "nvmf_set_max_subsystems",
00:23:05.804 "params": {
00:23:05.804 "max_subsystems": 1024
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "nvmf_set_crdt",
00:23:05.804 "params": {
00:23:05.804 "crdt1": 0,
00:23:05.804 "crdt2": 0,
00:23:05.804 "crdt3": 0
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "nvmf_create_transport",
00:23:05.804 "params": {
00:23:05.804 "trtype": "TCP",
00:23:05.804 "max_queue_depth": 128,
00:23:05.804 "max_io_qpairs_per_ctrlr": 127,
00:23:05.804 "in_capsule_data_size": 4096,
00:23:05.804 "max_io_size": 131072,
00:23:05.804 "io_unit_size": 131072,
00:23:05.804 "max_aq_depth": 128,
00:23:05.804 "num_shared_buffers": 511,
00:23:05.804 "buf_cache_size": 4294967295,
00:23:05.804 "dif_insert_or_strip": false,
00:23:05.804 "zcopy": false,
00:23:05.804 "c2h_success": false,
00:23:05.804 "sock_priority": 0,
00:23:05.804 "abort_timeout_sec": 1,
00:23:05.804 "ack_timeout": 0,
00:23:05.804 "data_wr_pool_size": 0
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "nvmf_create_subsystem",
00:23:05.804 "params": {
00:23:05.804 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:05.804 "allow_any_host": false,
00:23:05.804 "serial_number": "SPDK00000000000001",
00:23:05.804 "model_number": "SPDK bdev Controller",
00:23:05.804 "max_namespaces": 10,
00:23:05.804 "min_cntlid": 1,
00:23:05.804 "max_cntlid": 65519,
00:23:05.804 "ana_reporting": false
00:23:05.804 }
00:23:05.804 },
00:23:05.804 {
00:23:05.804 "method": "nvmf_subsystem_add_host",
00:23:05.804 "params": {
00:23:05.804 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:05.804 "host": "nqn.2016-06.io.spdk:host1",
00:23:05.804 "psk": "/tmp/tmp.Cnb6qXGBWU"
00:23:05.804 } 00:23:05.804 }, 00:23:05.804 { 00:23:05.804 "method": "nvmf_subsystem_add_ns", 00:23:05.804 "params": { 00:23:05.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.804 "namespace": { 00:23:05.804 "nsid": 1, 00:23:05.804 "bdev_name": "malloc0", 00:23:05.804 "nguid": "7A5A3464CAC6494B9CF842F8415CDFB9", 00:23:05.804 "uuid": "7a5a3464-cac6-494b-9cf8-42f8415cdfb9", 00:23:05.804 "no_auto_visible": false 00:23:05.804 } 00:23:05.804 } 00:23:05.804 }, 00:23:05.804 { 00:23:05.804 "method": "nvmf_subsystem_add_listener", 00:23:05.804 "params": { 00:23:05.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.804 "listen_address": { 00:23:05.804 "trtype": "TCP", 00:23:05.804 "adrfam": "IPv4", 00:23:05.804 "traddr": "10.0.0.2", 00:23:05.804 "trsvcid": "4420" 00:23:05.804 }, 00:23:05.804 "secure_channel": true 00:23:05.804 } 00:23:05.804 } 00:23:05.804 ] 00:23:05.804 } 00:23:05.804 ] 00:23:05.804 }' 00:23:05.804 06:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:06.065 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:06.065 "subsystems": [ 00:23:06.065 { 00:23:06.065 "subsystem": "keyring", 00:23:06.065 "config": [] 00:23:06.065 }, 00:23:06.065 { 00:23:06.065 "subsystem": "iobuf", 00:23:06.065 "config": [ 00:23:06.065 { 00:23:06.065 "method": "iobuf_set_options", 00:23:06.065 "params": { 00:23:06.065 "small_pool_count": 8192, 00:23:06.065 "large_pool_count": 1024, 00:23:06.065 "small_bufsize": 8192, 00:23:06.065 "large_bufsize": 135168 00:23:06.065 } 00:23:06.065 } 00:23:06.065 ] 00:23:06.065 }, 00:23:06.065 { 00:23:06.065 "subsystem": "sock", 00:23:06.065 "config": [ 00:23:06.065 { 00:23:06.065 "method": "sock_set_default_impl", 00:23:06.065 "params": { 00:23:06.065 "impl_name": "posix" 00:23:06.065 } 00:23:06.065 }, 00:23:06.065 { 00:23:06.065 "method": "sock_impl_set_options", 00:23:06.065 
"params": { 00:23:06.066 "impl_name": "ssl", 00:23:06.066 "recv_buf_size": 4096, 00:23:06.066 "send_buf_size": 4096, 00:23:06.066 "enable_recv_pipe": true, 00:23:06.066 "enable_quickack": false, 00:23:06.066 "enable_placement_id": 0, 00:23:06.066 "enable_zerocopy_send_server": true, 00:23:06.066 "enable_zerocopy_send_client": false, 00:23:06.066 "zerocopy_threshold": 0, 00:23:06.066 "tls_version": 0, 00:23:06.066 "enable_ktls": false 00:23:06.066 } 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "method": "sock_impl_set_options", 00:23:06.066 "params": { 00:23:06.066 "impl_name": "posix", 00:23:06.066 "recv_buf_size": 2097152, 00:23:06.066 "send_buf_size": 2097152, 00:23:06.066 "enable_recv_pipe": true, 00:23:06.066 "enable_quickack": false, 00:23:06.066 "enable_placement_id": 0, 00:23:06.066 "enable_zerocopy_send_server": true, 00:23:06.066 "enable_zerocopy_send_client": false, 00:23:06.066 "zerocopy_threshold": 0, 00:23:06.066 "tls_version": 0, 00:23:06.066 "enable_ktls": false 00:23:06.066 } 00:23:06.066 } 00:23:06.066 ] 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "subsystem": "vmd", 00:23:06.066 "config": [] 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "subsystem": "accel", 00:23:06.066 "config": [ 00:23:06.066 { 00:23:06.066 "method": "accel_set_options", 00:23:06.066 "params": { 00:23:06.066 "small_cache_size": 128, 00:23:06.066 "large_cache_size": 16, 00:23:06.066 "task_count": 2048, 00:23:06.066 "sequence_count": 2048, 00:23:06.066 "buf_count": 2048 00:23:06.066 } 00:23:06.066 } 00:23:06.066 ] 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "subsystem": "bdev", 00:23:06.066 "config": [ 00:23:06.066 { 00:23:06.066 "method": "bdev_set_options", 00:23:06.066 "params": { 00:23:06.066 "bdev_io_pool_size": 65535, 00:23:06.066 "bdev_io_cache_size": 256, 00:23:06.066 "bdev_auto_examine": true, 00:23:06.066 "iobuf_small_cache_size": 128, 00:23:06.066 "iobuf_large_cache_size": 16 00:23:06.066 } 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "method": "bdev_raid_set_options", 
00:23:06.066 "params": { 00:23:06.066 "process_window_size_kb": 1024, 00:23:06.066 "process_max_bandwidth_mb_sec": 0 00:23:06.066 } 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "method": "bdev_iscsi_set_options", 00:23:06.066 "params": { 00:23:06.066 "timeout_sec": 30 00:23:06.066 } 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "method": "bdev_nvme_set_options", 00:23:06.066 "params": { 00:23:06.066 "action_on_timeout": "none", 00:23:06.066 "timeout_us": 0, 00:23:06.066 "timeout_admin_us": 0, 00:23:06.066 "keep_alive_timeout_ms": 10000, 00:23:06.066 "arbitration_burst": 0, 00:23:06.066 "low_priority_weight": 0, 00:23:06.066 "medium_priority_weight": 0, 00:23:06.066 "high_priority_weight": 0, 00:23:06.066 "nvme_adminq_poll_period_us": 10000, 00:23:06.066 "nvme_ioq_poll_period_us": 0, 00:23:06.066 "io_queue_requests": 512, 00:23:06.066 "delay_cmd_submit": true, 00:23:06.066 "transport_retry_count": 4, 00:23:06.066 "bdev_retry_count": 3, 00:23:06.066 "transport_ack_timeout": 0, 00:23:06.066 "ctrlr_loss_timeout_sec": 0, 00:23:06.066 "reconnect_delay_sec": 0, 00:23:06.066 "fast_io_fail_timeout_sec": 0, 00:23:06.066 "disable_auto_failback": false, 00:23:06.066 "generate_uuids": false, 00:23:06.066 "transport_tos": 0, 00:23:06.066 "nvme_error_stat": false, 00:23:06.066 "rdma_srq_size": 0, 00:23:06.066 "io_path_stat": false, 00:23:06.066 "allow_accel_sequence": false, 00:23:06.066 "rdma_max_cq_size": 0, 00:23:06.066 "rdma_cm_event_timeout_ms": 0, 00:23:06.066 "dhchap_digests": [ 00:23:06.066 "sha256", 00:23:06.066 "sha384", 00:23:06.066 "sha512" 00:23:06.066 ], 00:23:06.066 "dhchap_dhgroups": [ 00:23:06.066 "null", 00:23:06.066 "ffdhe2048", 00:23:06.066 "ffdhe3072", 00:23:06.066 "ffdhe4096", 00:23:06.066 "ffdhe6144", 00:23:06.066 "ffdhe8192" 00:23:06.066 ] 00:23:06.066 } 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "method": "bdev_nvme_attach_controller", 00:23:06.066 "params": { 00:23:06.066 "name": "TLSTEST", 00:23:06.066 "trtype": "TCP", 00:23:06.066 "adrfam": "IPv4", 
00:23:06.066 "traddr": "10.0.0.2", 00:23:06.066 "trsvcid": "4420", 00:23:06.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.066 "prchk_reftag": false, 00:23:06.066 "prchk_guard": false, 00:23:06.066 "ctrlr_loss_timeout_sec": 0, 00:23:06.066 "reconnect_delay_sec": 0, 00:23:06.066 "fast_io_fail_timeout_sec": 0, 00:23:06.066 "psk": "/tmp/tmp.Cnb6qXGBWU", 00:23:06.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.066 "hdgst": false, 00:23:06.066 "ddgst": false 00:23:06.066 } 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "method": "bdev_nvme_set_hotplug", 00:23:06.066 "params": { 00:23:06.066 "period_us": 100000, 00:23:06.066 "enable": false 00:23:06.066 } 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "method": "bdev_wait_for_examine" 00:23:06.066 } 00:23:06.066 ] 00:23:06.066 }, 00:23:06.066 { 00:23:06.066 "subsystem": "nbd", 00:23:06.066 "config": [] 00:23:06.066 } 00:23:06.066 ] 00:23:06.066 }' 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 184701 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 184701 ']' 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 184701 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 184701 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 184701' 00:23:06.066 killing process with pid 
184701 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 184701 00:23:06.066 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.066 00:23:06.066 Latency(us) 00:23:06.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.066 =================================================================================================================== 00:23:06.066 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:06.066 [2024-07-26 06:15:17.192308] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:06.066 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 184701 00:23:07.039 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 184293 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 184293 ']' 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 184293 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 184293 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 184293' 00:23:07.040 killing process with pid 184293 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 184293 00:23:07.040 [2024-07-26 06:15:18.190835] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:07.040 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 184293 00:23:08.419 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:08.419 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:08.419 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.419 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:08.419 "subsystems": [ 00:23:08.419 { 00:23:08.419 "subsystem": "keyring", 00:23:08.419 "config": [] 00:23:08.419 }, 00:23:08.419 { 00:23:08.419 "subsystem": "iobuf", 00:23:08.419 "config": [ 00:23:08.419 { 00:23:08.419 "method": "iobuf_set_options", 00:23:08.419 "params": { 00:23:08.419 "small_pool_count": 8192, 00:23:08.419 "large_pool_count": 1024, 00:23:08.419 "small_bufsize": 8192, 00:23:08.419 "large_bufsize": 135168 00:23:08.419 } 00:23:08.420 } 00:23:08.420 ] 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "subsystem": "sock", 00:23:08.420 "config": [ 00:23:08.420 { 00:23:08.420 "method": "sock_set_default_impl", 00:23:08.420 "params": { 00:23:08.420 "impl_name": "posix" 00:23:08.420 } 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "method": "sock_impl_set_options", 00:23:08.420 "params": { 00:23:08.420 "impl_name": "ssl", 00:23:08.420 "recv_buf_size": 4096, 00:23:08.420 "send_buf_size": 4096, 00:23:08.420 "enable_recv_pipe": true, 00:23:08.420 "enable_quickack": false, 00:23:08.420 "enable_placement_id": 0, 00:23:08.420 "enable_zerocopy_send_server": 
true, 00:23:08.420 "enable_zerocopy_send_client": false, 00:23:08.420 "zerocopy_threshold": 0, 00:23:08.420 "tls_version": 0, 00:23:08.420 "enable_ktls": false 00:23:08.420 } 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "method": "sock_impl_set_options", 00:23:08.420 "params": { 00:23:08.420 "impl_name": "posix", 00:23:08.420 "recv_buf_size": 2097152, 00:23:08.420 "send_buf_size": 2097152, 00:23:08.420 "enable_recv_pipe": true, 00:23:08.420 "enable_quickack": false, 00:23:08.420 "enable_placement_id": 0, 00:23:08.420 "enable_zerocopy_send_server": true, 00:23:08.420 "enable_zerocopy_send_client": false, 00:23:08.420 "zerocopy_threshold": 0, 00:23:08.420 "tls_version": 0, 00:23:08.420 "enable_ktls": false 00:23:08.420 } 00:23:08.420 } 00:23:08.420 ] 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "subsystem": "vmd", 00:23:08.420 "config": [] 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "subsystem": "accel", 00:23:08.420 "config": [ 00:23:08.420 { 00:23:08.420 "method": "accel_set_options", 00:23:08.420 "params": { 00:23:08.420 "small_cache_size": 128, 00:23:08.420 "large_cache_size": 16, 00:23:08.420 "task_count": 2048, 00:23:08.420 "sequence_count": 2048, 00:23:08.420 "buf_count": 2048 00:23:08.420 } 00:23:08.420 } 00:23:08.420 ] 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "subsystem": "bdev", 00:23:08.420 "config": [ 00:23:08.420 { 00:23:08.420 "method": "bdev_set_options", 00:23:08.420 "params": { 00:23:08.420 "bdev_io_pool_size": 65535, 00:23:08.420 "bdev_io_cache_size": 256, 00:23:08.420 "bdev_auto_examine": true, 00:23:08.420 "iobuf_small_cache_size": 128, 00:23:08.420 "iobuf_large_cache_size": 16 00:23:08.420 } 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "method": "bdev_raid_set_options", 00:23:08.420 "params": { 00:23:08.420 "process_window_size_kb": 1024, 00:23:08.420 "process_max_bandwidth_mb_sec": 0 00:23:08.420 } 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "method": "bdev_iscsi_set_options", 00:23:08.420 "params": { 00:23:08.420 "timeout_sec": 30 00:23:08.420 
} 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "method": "bdev_nvme_set_options", 00:23:08.420 "params": { 00:23:08.420 "action_on_timeout": "none", 00:23:08.420 "timeout_us": 0, 00:23:08.420 "timeout_admin_us": 0, 00:23:08.420 "keep_alive_timeout_ms": 10000, 00:23:08.420 "arbitration_burst": 0, 00:23:08.420 "low_priority_weight": 0, 00:23:08.420 "medium_priority_weight": 0, 00:23:08.420 "high_priority_weight": 0, 00:23:08.420 "nvme_adminq_poll_period_us": 10000, 00:23:08.420 "nvme_ioq_poll_period_us": 0, 00:23:08.420 "io_queue_requests": 0, 00:23:08.420 "delay_cmd_submit": true, 00:23:08.420 "transport_retry_count": 4, 00:23:08.420 "bdev_retry_count": 3, 00:23:08.420 "transport_ack_timeout": 0, 00:23:08.420 "ctrlr_loss_timeout_sec": 0, 00:23:08.420 "reconnect_delay_sec": 0, 00:23:08.420 "fast_io_fail_timeout_sec": 0, 00:23:08.420 "disable_auto_failback": false, 00:23:08.420 "generate_uuids": false, 00:23:08.420 "transport_tos": 0, 00:23:08.420 "nvme_error_stat": false, 00:23:08.420 "rdma_srq_size": 0, 00:23:08.420 "io_path_stat": false, 00:23:08.420 "allow_accel_sequence": false, 00:23:08.420 "rdma_max_cq_size": 0, 00:23:08.420 "rdma_cm_event_timeout_ms": 0, 00:23:08.420 "dhchap_digests": [ 00:23:08.420 "sha256", 00:23:08.420 "sha384", 00:23:08.420 "sha512" 00:23:08.420 ], 00:23:08.420 "dhchap_dhgroups": [ 00:23:08.420 "null", 00:23:08.420 "ffdhe2048", 00:23:08.420 "ffdhe3072", 00:23:08.420 "ffdhe4096", 00:23:08.420 "ffdhe6144", 00:23:08.420 "ffdhe8192" 00:23:08.420 ] 00:23:08.420 } 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "method": "bdev_nvme_set_hotplug", 00:23:08.420 "params": { 00:23:08.420 "period_us": 100000, 00:23:08.420 "enable": false 00:23:08.420 } 00:23:08.420 }, 00:23:08.420 { 00:23:08.420 "method": "bdev_malloc_create", 00:23:08.420 "params": { 00:23:08.421 "name": "malloc0", 00:23:08.421 "num_blocks": 8192, 00:23:08.421 "block_size": 4096, 00:23:08.421 "physical_block_size": 4096, 00:23:08.421 "uuid": "7a5a3464-cac6-494b-9cf8-42f8415cdfb9", 
00:23:08.421 "optimal_io_boundary": 0, 00:23:08.421 "md_size": 0, 00:23:08.421 "dif_type": 0, 00:23:08.421 "dif_is_head_of_md": false, 00:23:08.421 "dif_pi_format": 0 00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "bdev_wait_for_examine" 00:23:08.421 } 00:23:08.421 ] 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "subsystem": "nbd", 00:23:08.421 "config": [] 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "subsystem": "scheduler", 00:23:08.421 "config": [ 00:23:08.421 { 00:23:08.421 "method": "framework_set_scheduler", 00:23:08.421 "params": { 00:23:08.421 "name": "static" 00:23:08.421 } 00:23:08.421 } 00:23:08.421 ] 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "subsystem": "nvmf", 00:23:08.421 "config": [ 00:23:08.421 { 00:23:08.421 "method": "nvmf_set_config", 00:23:08.421 "params": { 00:23:08.421 "discovery_filter": "match_any", 00:23:08.421 "admin_cmd_passthru": { 00:23:08.421 "identify_ctrlr": false 00:23:08.421 } 00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "nvmf_set_max_subsystems", 00:23:08.421 "params": { 00:23:08.421 "max_subsystems": 1024 00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "nvmf_set_crdt", 00:23:08.421 "params": { 00:23:08.421 "crdt1": 0, 00:23:08.421 "crdt2": 0, 00:23:08.421 "crdt3": 0 00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "nvmf_create_transport", 00:23:08.421 "params": { 00:23:08.421 "trtype": "TCP", 00:23:08.421 "max_queue_depth": 128, 00:23:08.421 "max_io_qpairs_per_ctrlr": 127, 00:23:08.421 "in_capsule_data_size": 4096, 00:23:08.421 "max_io_size": 131072, 00:23:08.421 "io_unit_size": 131072, 00:23:08.421 "max_aq_depth": 128, 00:23:08.421 "num_shared_buffers": 511, 00:23:08.421 "buf_cache_size": 4294967295, 00:23:08.421 "dif_insert_or_strip": false, 00:23:08.421 "zcopy": false, 00:23:08.421 "c2h_success": false, 00:23:08.421 "sock_priority": 0, 00:23:08.421 "abort_timeout_sec": 1, 00:23:08.421 "ack_timeout": 0, 00:23:08.421 "data_wr_pool_size": 0 
00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "nvmf_create_subsystem", 00:23:08.421 "params": { 00:23:08.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.421 "allow_any_host": false, 00:23:08.421 "serial_number": "SPDK00000000000001", 00:23:08.421 "model_number": "SPDK bdev Controller", 00:23:08.421 "max_namespaces": 10, 00:23:08.421 "min_cntlid": 1, 00:23:08.421 "max_cntlid": 65519, 00:23:08.421 "ana_reporting": false 00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "nvmf_subsystem_add_host", 00:23:08.421 "params": { 00:23:08.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.421 "host": "nqn.2016-06.io.spdk:host1", 00:23:08.421 "psk": "/tmp/tmp.Cnb6qXGBWU" 00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "nvmf_subsystem_add_ns", 00:23:08.421 "params": { 00:23:08.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.421 "namespace": { 00:23:08.421 "nsid": 1, 00:23:08.421 "bdev_name": "malloc0", 00:23:08.421 "nguid": "7A5A3464CAC6494B9CF842F8415CDFB9", 00:23:08.421 "uuid": "7a5a3464-cac6-494b-9cf8-42f8415cdfb9", 00:23:08.421 "no_auto_visible": false 00:23:08.421 } 00:23:08.421 } 00:23:08.421 }, 00:23:08.421 { 00:23:08.421 "method": "nvmf_subsystem_add_listener", 00:23:08.421 "params": { 00:23:08.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.421 "listen_address": { 00:23:08.421 "trtype": "TCP", 00:23:08.421 "adrfam": "IPv4", 00:23:08.421 "traddr": "10.0.0.2", 00:23:08.421 "trsvcid": "4420" 00:23:08.421 }, 00:23:08.421 "secure_channel": true 00:23:08.421 } 00:23:08.421 } 00:23:08.421 ] 00:23:08.421 } 00:23:08.421 ] 00:23:08.421 }' 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=185246 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 185246 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 185246 ']' 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.421 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.422 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.422 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.422 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.422 [2024-07-26 06:15:19.714965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:08.422 [2024-07-26 06:15:19.715138] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.680 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.680 [2024-07-26 06:15:19.852024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.939 [2024-07-26 06:15:20.108326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.939 [2024-07-26 06:15:20.108413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:08.939 [2024-07-26 06:15:20.108442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.939 [2024-07-26 06:15:20.108468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.939 [2024-07-26 06:15:20.108490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.939 [2024-07-26 06:15:20.108636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.509 [2024-07-26 06:15:20.629868] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.509 [2024-07-26 06:15:20.645885] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.509 [2024-07-26 06:15:20.661900] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.509 [2024-07-26 06:15:20.662226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=185396 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 185396 /var/tmp/bdevperf.sock 00:23:09.509 06:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 185396 ']' 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.509 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:09.509 "subsystems": [ 00:23:09.509 { 00:23:09.509 "subsystem": "keyring", 00:23:09.509 "config": [] 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "subsystem": "iobuf", 00:23:09.509 "config": [ 00:23:09.509 { 00:23:09.509 "method": "iobuf_set_options", 00:23:09.509 "params": { 00:23:09.509 "small_pool_count": 8192, 00:23:09.509 "large_pool_count": 1024, 00:23:09.509 "small_bufsize": 8192, 00:23:09.509 "large_bufsize": 135168 00:23:09.509 } 00:23:09.509 } 00:23:09.509 ] 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "subsystem": "sock", 00:23:09.509 "config": [ 00:23:09.509 { 00:23:09.509 "method": "sock_set_default_impl", 00:23:09.509 "params": { 00:23:09.509 "impl_name": "posix" 00:23:09.509 } 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "method": "sock_impl_set_options", 00:23:09.509 "params": { 00:23:09.509 "impl_name": "ssl", 00:23:09.509 "recv_buf_size": 4096, 00:23:09.509 "send_buf_size": 4096, 00:23:09.509 "enable_recv_pipe": true, 00:23:09.509 "enable_quickack": false, 00:23:09.509 
"enable_placement_id": 0, 00:23:09.509 "enable_zerocopy_send_server": true, 00:23:09.509 "enable_zerocopy_send_client": false, 00:23:09.509 "zerocopy_threshold": 0, 00:23:09.509 "tls_version": 0, 00:23:09.509 "enable_ktls": false 00:23:09.509 } 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "method": "sock_impl_set_options", 00:23:09.509 "params": { 00:23:09.509 "impl_name": "posix", 00:23:09.509 "recv_buf_size": 2097152, 00:23:09.509 "send_buf_size": 2097152, 00:23:09.509 "enable_recv_pipe": true, 00:23:09.509 "enable_quickack": false, 00:23:09.509 "enable_placement_id": 0, 00:23:09.509 "enable_zerocopy_send_server": true, 00:23:09.509 "enable_zerocopy_send_client": false, 00:23:09.509 "zerocopy_threshold": 0, 00:23:09.509 "tls_version": 0, 00:23:09.509 "enable_ktls": false 00:23:09.509 } 00:23:09.509 } 00:23:09.509 ] 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "subsystem": "vmd", 00:23:09.509 "config": [] 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "subsystem": "accel", 00:23:09.509 "config": [ 00:23:09.509 { 00:23:09.509 "method": "accel_set_options", 00:23:09.509 "params": { 00:23:09.509 "small_cache_size": 128, 00:23:09.509 "large_cache_size": 16, 00:23:09.509 "task_count": 2048, 00:23:09.509 "sequence_count": 2048, 00:23:09.509 "buf_count": 2048 00:23:09.509 } 00:23:09.509 } 00:23:09.509 ] 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "subsystem": "bdev", 00:23:09.509 "config": [ 00:23:09.509 { 00:23:09.509 "method": "bdev_set_options", 00:23:09.509 "params": { 00:23:09.509 "bdev_io_pool_size": 65535, 00:23:09.509 "bdev_io_cache_size": 256, 00:23:09.509 "bdev_auto_examine": true, 00:23:09.509 "iobuf_small_cache_size": 128, 00:23:09.509 "iobuf_large_cache_size": 16 00:23:09.509 } 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "method": "bdev_raid_set_options", 00:23:09.509 "params": { 00:23:09.509 "process_window_size_kb": 1024, 00:23:09.509 "process_max_bandwidth_mb_sec": 0 00:23:09.509 } 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "method": "bdev_iscsi_set_options", 
00:23:09.509 "params": { 00:23:09.509 "timeout_sec": 30 00:23:09.509 } 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "method": "bdev_nvme_set_options", 00:23:09.509 "params": { 00:23:09.509 "action_on_timeout": "none", 00:23:09.509 "timeout_us": 0, 00:23:09.509 "timeout_admin_us": 0, 00:23:09.509 "keep_alive_timeout_ms": 10000, 00:23:09.509 "arbitration_burst": 0, 00:23:09.509 "low_priority_weight": 0, 00:23:09.509 "medium_priority_weight": 0, 00:23:09.509 "high_priority_weight": 0, 00:23:09.509 "nvme_adminq_poll_period_us": 10000, 00:23:09.509 "nvme_ioq_poll_period_us": 0, 00:23:09.509 "io_queue_requests": 512, 00:23:09.509 "delay_cmd_submit": true, 00:23:09.509 "transport_retry_count": 4, 00:23:09.509 "bdev_retry_count": 3, 00:23:09.509 "transport_ack_timeout": 0, 00:23:09.509 "ctrlr_loss_timeout_sec": 0, 00:23:09.509 "reconnect_delay_sec": 0, 00:23:09.509 "fast_io_fail_timeout_sec": 0, 00:23:09.509 "disable_auto_failback": false, 00:23:09.509 "generate_uuids": false, 00:23:09.509 "transport_tos": 0, 00:23:09.509 "nvme_error_stat": false, 00:23:09.509 "rdma_srq_size": 0, 00:23:09.509 "io_path_stat": false, 00:23:09.509 "allow_accel_sequence": false, 00:23:09.509 "rdma_max_cq_size": 0, 00:23:09.509 "rdma_cm_event_timeout_ms": 0, 00:23:09.509 "dhchap_digests": [ 00:23:09.509 "sha256", 00:23:09.509 "sha384", 00:23:09.509 "sha512" 00:23:09.509 ], 00:23:09.509 "dhchap_dhgroups": [ 00:23:09.509 "null", 00:23:09.509 "ffdhe2048", 00:23:09.509 "ffdhe3072", 00:23:09.509 "ffdhe4096", 00:23:09.509 "ffdhe6144", 00:23:09.509 "ffdhe8192" 00:23:09.509 ] 00:23:09.509 } 00:23:09.509 }, 00:23:09.509 { 00:23:09.509 "method": "bdev_nvme_attach_controller", 00:23:09.509 "params": { 00:23:09.509 "name": "TLSTEST", 00:23:09.509 "trtype": "TCP", 00:23:09.509 "adrfam": "IPv4", 00:23:09.509 "traddr": "10.0.0.2", 00:23:09.509 "trsvcid": "4420", 00:23:09.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.509 "prchk_reftag": false, 00:23:09.509 "prchk_guard": false, 00:23:09.509 
"ctrlr_loss_timeout_sec": 0, 00:23:09.509 "reconnect_delay_sec": 0, 00:23:09.509 "fast_io_fail_timeout_sec": 0, 00:23:09.509 "psk": "/tmp/tmp.Cnb6qXGBWU", 00:23:09.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.509 "hdgst": false, 00:23:09.509 "ddgst": false 00:23:09.509 } 00:23:09.509 }, 00:23:09.509 { 00:23:09.510 "method": "bdev_nvme_set_hotplug", 00:23:09.510 "params": { 00:23:09.510 "period_us": 100000, 00:23:09.510 "enable": false 00:23:09.510 } 00:23:09.510 }, 00:23:09.510 { 00:23:09.510 "method": "bdev_wait_for_examine" 00:23:09.510 } 00:23:09.510 ] 00:23:09.510 }, 00:23:09.510 { 00:23:09.510 "subsystem": "nbd", 00:23:09.510 "config": [] 00:23:09.510 } 00:23:09.510 ] 00:23:09.510 }' 00:23:09.510 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.510 06:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.510 [2024-07-26 06:15:20.788451] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:09.510 [2024-07-26 06:15:20.788605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185396 ] 00:23:09.770 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.770 [2024-07-26 06:15:20.909808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.030 [2024-07-26 06:15:21.146206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.288 [2024-07-26 06:15:21.540864] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.288 [2024-07-26 06:15:21.541054] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.547 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.547 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:10.547 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:10.547 Running I/O for 10 seconds... 
00:23:22.761 00:23:22.761 Latency(us) 00:23:22.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.761 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:22.761 Verification LBA range: start 0x0 length 0x2000 00:23:22.761 TLSTESTn1 : 10.05 1422.13 5.56 0.00 0.00 89859.89 17961.72 82332.63 00:23:22.761 =================================================================================================================== 00:23:22.761 Total : 1422.13 5.56 0.00 0.00 89859.89 17961.72 82332.63 00:23:22.761 0 00:23:22.761 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 185396 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 185396 ']' 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 185396 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 185396 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 185396' 00:23:22.762 killing process with pid 185396 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 185396 00:23:22.762 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.762 
00:23:22.762 Latency(us) 00:23:22.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.762 =================================================================================================================== 00:23:22.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.762 [2024-07-26 06:15:31.932210] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:22.762 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 185396 00:23:22.762 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 185246 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 185246 ']' 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 185246 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 185246 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 185246' 00:23:22.762 killing process with pid 185246 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 185246 00:23:22.762 [2024-07-26 06:15:32.948745] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:22.762 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 185246 00:23:23.020 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=186986 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 186986 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 186986 ']' 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.278 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.278 [2024-07-26 06:15:34.456682] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:23.278 [2024-07-26 06:15:34.456825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.278 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.278 [2024-07-26 06:15:34.589287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.536 [2024-07-26 06:15:34.839563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.536 [2024-07-26 06:15:34.839644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.536 [2024-07-26 06:15:34.839684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.536 [2024-07-26 06:15:34.839711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.536 [2024-07-26 06:15:34.839734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
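Each daemon start in this log is followed by a "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message from the waitforlisten helper. The underlying idea is a bounded poll until the RPC socket accepts a connection. A standalone sketch of that pattern (the function name, timeout, and interval here are illustrative choices, not the real helper — the actual waitforlisten in autotest_common.sh also tracks the target pid):

```python
import os
import socket
import time

def wait_for_rpc_socket(path: str, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Poll until a UNIX-domain socket at `path` accepts a connection, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True  # the daemon is up and listening
                except OSError:
                    pass  # socket file exists but nothing is accepting yet
        time.sleep(interval)
    return False
```

Polling the socket (rather than just the pid) is what lets the scripts issue rpc.py calls immediately after the wait returns.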
00:23:23.536 [2024-07-26 06:15:34.839788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Cnb6qXGBWU 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Cnb6qXGBWU 00:23:24.101 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.359 [2024-07-26 06:15:35.611072] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.359 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.616 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:24.874 [2024-07-26 06:15:36.104473] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.874 [2024-07-26 06:15:36.104831] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:24.874 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.134 malloc0 00:23:25.134 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.392 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cnb6qXGBWU 00:23:25.650 [2024-07-26 06:15:36.943551] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=187274 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 187274 /var/tmp/bdevperf.sock 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 187274 ']' 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:25.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.650 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.909 [2024-07-26 06:15:37.043442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:25.909 [2024-07-26 06:15:37.043598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187274 ] 00:23:25.909 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.909 [2024-07-26 06:15:37.172886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.169 [2024-07-26 06:15:37.429933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.733 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.733 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:26.733 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Cnb6qXGBWU 00:23:26.990 06:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:27.249 [2024-07-26 06:15:38.523677] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.508 nvme0n1 00:23:27.508 06:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.508 Running I/O for 1 seconds... 00:23:28.891 00:23:28.891 Latency(us) 00:23:28.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.891 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:28.891 Verification LBA range: start 0x0 length 0x2000 00:23:28.891 nvme0n1 : 1.03 2623.94 10.25 0.00 0.00 48063.80 9611.95 50875.35 00:23:28.891 =================================================================================================================== 00:23:28.891 Total : 2623.94 10.25 0.00 0.00 48063.80 9611.95 50875.35 00:23:28.891 0 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 187274 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 187274 ']' 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 187274 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 187274 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 187274' 00:23:28.891 killing process with pid 187274 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 187274 
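In the bdevperf summary tables above, the MiB/s column is the IOPS column scaled by the fixed 4096-byte I/O size (`-o 4k` on the bdevperf command line). A small sanity check of that relationship, using figures copied from the tables in this log (the helper name is mine, not part of the test suite):

```python
IO_SIZE_BYTES = 4096  # bdevperf is invoked with -o 4k

def iops_to_mib_per_s(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
    """Scale an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

# 2623.94 IOPS from the 1-second nvme0n1 run above.
print(round(iops_to_mib_per_s(2623.94), 2))  # prints 10.25, matching the MiB/s column
```

The same factor of 256 (4096 / 2^20) relates the 1422.13 IOPS and 5.56 MiB/s pair in the earlier TLSTESTn1 table.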
00:23:28.891 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.891 00:23:28.891 Latency(us) 00:23:28.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.891 =================================================================================================================== 00:23:28.891 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.891 06:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 187274 00:23:29.491 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 186986 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 186986 ']' 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 186986 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 186986 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 186986' 00:23:29.754 killing process with pid 186986 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 186986 00:23:29.754 [2024-07-26 06:15:40.884376] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in 
v24.09 hit 1 times 00:23:29.754 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 186986 00:23:31.136 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=187871 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 187871 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 187871 ']' 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.136 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.136 [2024-07-26 06:15:42.308421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:31.136 [2024-07-26 06:15:42.308576] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.136 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.136 [2024-07-26 06:15:42.455839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.396 [2024-07-26 06:15:42.712851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.396 [2024-07-26 06:15:42.712944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.396 [2024-07-26 06:15:42.712973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.396 [2024-07-26 06:15:42.713009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.396 [2024-07-26 06:15:42.713031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
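The target-side TLS setup that this log repeats (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener, nvmf_subsystem_add_ns, nvmf_subsystem_add_host with a PSK path) travels over the /var/tmp/spdk.sock socket as JSON-RPC 2.0 requests. A minimal sketch of how those calls can be framed, with parameter values copied from this log and from the saved config dump below (the framing helper is illustrative — rpc.py handles the real socket I/O and option mapping):

```python
import json

def rpc_request(req_id: int, method: str, params: dict) -> str:
    """Frame one JSON-RPC 2.0 request as it would be sent over spdk.sock."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "method": method, "params": params})

# The setup sequence from this log, as (method, params) pairs.
setup_calls = [
    ("nvmf_create_transport", {"trtype": "TCP"}),
    ("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "serial_number": "SPDK00000000000001"}),
    # The -k flag on nvmf_subsystem_add_listener shows up in the saved config
    # as an "ssl" sock_impl on the listener.
    ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                     "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                                        "traddr": "10.0.0.2",
                                                        "trsvcid": "4420"},
                                     "secure_channel": False, "sock_impl": "ssl"}),
    ("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "namespace": {"bdev_name": "malloc0", "nsid": 1}}),
    ("nvmf_subsystem_add_host", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                 "host": "nqn.2016-06.io.spdk:host1",
                                 "psk": "/tmp/tmp.Cnb6qXGBWU"}),
]

requests = [rpc_request(i, m, p) for i, (m, p) in enumerate(setup_calls, start=1)]
```

Passing the PSK as a file path is the deprecated interface — hence the "nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09" warnings throughout this log; the bdevperf side of the test already uses the keyring (`keyring_file_add_key` plus `--psk key0`) instead.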
00:23:31.396 [2024-07-26 06:15:42.713094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.963 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.963 [2024-07-26 06:15:43.274923] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.230 malloc0 00:23:32.230 [2024-07-26 06:15:43.350607] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.230 [2024-07-26 06:15:43.350993] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=188076 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 188076 /var/tmp/bdevperf.sock 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 188076 ']' 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.230 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.230 [2024-07-26 06:15:43.457835] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:32.230 [2024-07-26 06:15:43.457986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188076 ] 00:23:32.230 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.493 [2024-07-26 06:15:43.585850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.752 [2024-07-26 06:15:43.842364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.318 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.318 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:33.318 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Cnb6qXGBWU 00:23:33.318 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:33.578 [2024-07-26 06:15:44.866132] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.838 nvme0n1 00:23:33.838 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.838 Running I/O for 1 seconds... 
00:23:35.219 00:23:35.219 Latency(us) 00:23:35.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.219 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.219 Verification LBA range: start 0x0 length 0x2000 00:23:35.219 nvme0n1 : 1.03 2592.64 10.13 0.00 0.00 48639.35 9077.95 53593.88 00:23:35.219 =================================================================================================================== 00:23:35.219 Total : 2592.64 10.13 0.00 0.00 48639.35 9077.95 53593.88 00:23:35.219 0 00:23:35.219 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:35.219 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.219 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.219 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.219 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:35.219 "subsystems": [ 00:23:35.219 { 00:23:35.219 "subsystem": "keyring", 00:23:35.219 "config": [ 00:23:35.219 { 00:23:35.219 "method": "keyring_file_add_key", 00:23:35.219 "params": { 00:23:35.219 "name": "key0", 00:23:35.219 "path": "/tmp/tmp.Cnb6qXGBWU" 00:23:35.219 } 00:23:35.219 } 00:23:35.219 ] 00:23:35.219 }, 00:23:35.219 { 00:23:35.219 "subsystem": "iobuf", 00:23:35.219 "config": [ 00:23:35.219 { 00:23:35.219 "method": "iobuf_set_options", 00:23:35.219 "params": { 00:23:35.219 "small_pool_count": 8192, 00:23:35.219 "large_pool_count": 1024, 00:23:35.219 "small_bufsize": 8192, 00:23:35.219 "large_bufsize": 135168 00:23:35.219 } 00:23:35.219 } 00:23:35.219 ] 00:23:35.219 }, 00:23:35.219 { 00:23:35.219 "subsystem": "sock", 00:23:35.219 "config": [ 00:23:35.219 { 00:23:35.219 "method": "sock_set_default_impl", 00:23:35.219 "params": { 00:23:35.219 "impl_name": "posix" 00:23:35.219 } 
00:23:35.219 }, 00:23:35.219 { 00:23:35.219 "method": "sock_impl_set_options", 00:23:35.219 "params": { 00:23:35.219 "impl_name": "ssl", 00:23:35.219 "recv_buf_size": 4096, 00:23:35.219 "send_buf_size": 4096, 00:23:35.219 "enable_recv_pipe": true, 00:23:35.219 "enable_quickack": false, 00:23:35.219 "enable_placement_id": 0, 00:23:35.219 "enable_zerocopy_send_server": true, 00:23:35.219 "enable_zerocopy_send_client": false, 00:23:35.219 "zerocopy_threshold": 0, 00:23:35.219 "tls_version": 0, 00:23:35.219 "enable_ktls": false 00:23:35.219 } 00:23:35.219 }, 00:23:35.219 { 00:23:35.219 "method": "sock_impl_set_options", 00:23:35.219 "params": { 00:23:35.219 "impl_name": "posix", 00:23:35.219 "recv_buf_size": 2097152, 00:23:35.219 "send_buf_size": 2097152, 00:23:35.219 "enable_recv_pipe": true, 00:23:35.219 "enable_quickack": false, 00:23:35.219 "enable_placement_id": 0, 00:23:35.219 "enable_zerocopy_send_server": true, 00:23:35.220 "enable_zerocopy_send_client": false, 00:23:35.220 "zerocopy_threshold": 0, 00:23:35.220 "tls_version": 0, 00:23:35.220 "enable_ktls": false 00:23:35.220 } 00:23:35.220 } 00:23:35.220 ] 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "subsystem": "vmd", 00:23:35.220 "config": [] 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "subsystem": "accel", 00:23:35.220 "config": [ 00:23:35.220 { 00:23:35.220 "method": "accel_set_options", 00:23:35.220 "params": { 00:23:35.220 "small_cache_size": 128, 00:23:35.220 "large_cache_size": 16, 00:23:35.220 "task_count": 2048, 00:23:35.220 "sequence_count": 2048, 00:23:35.220 "buf_count": 2048 00:23:35.220 } 00:23:35.220 } 00:23:35.220 ] 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "subsystem": "bdev", 00:23:35.220 "config": [ 00:23:35.220 { 00:23:35.220 "method": "bdev_set_options", 00:23:35.220 "params": { 00:23:35.220 "bdev_io_pool_size": 65535, 00:23:35.220 "bdev_io_cache_size": 256, 00:23:35.220 "bdev_auto_examine": true, 00:23:35.220 "iobuf_small_cache_size": 128, 00:23:35.220 "iobuf_large_cache_size": 16 
00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "bdev_raid_set_options", 00:23:35.220 "params": { 00:23:35.220 "process_window_size_kb": 1024, 00:23:35.220 "process_max_bandwidth_mb_sec": 0 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "bdev_iscsi_set_options", 00:23:35.220 "params": { 00:23:35.220 "timeout_sec": 30 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "bdev_nvme_set_options", 00:23:35.220 "params": { 00:23:35.220 "action_on_timeout": "none", 00:23:35.220 "timeout_us": 0, 00:23:35.220 "timeout_admin_us": 0, 00:23:35.220 "keep_alive_timeout_ms": 10000, 00:23:35.220 "arbitration_burst": 0, 00:23:35.220 "low_priority_weight": 0, 00:23:35.220 "medium_priority_weight": 0, 00:23:35.220 "high_priority_weight": 0, 00:23:35.220 "nvme_adminq_poll_period_us": 10000, 00:23:35.220 "nvme_ioq_poll_period_us": 0, 00:23:35.220 "io_queue_requests": 0, 00:23:35.220 "delay_cmd_submit": true, 00:23:35.220 "transport_retry_count": 4, 00:23:35.220 "bdev_retry_count": 3, 00:23:35.220 "transport_ack_timeout": 0, 00:23:35.220 "ctrlr_loss_timeout_sec": 0, 00:23:35.220 "reconnect_delay_sec": 0, 00:23:35.220 "fast_io_fail_timeout_sec": 0, 00:23:35.220 "disable_auto_failback": false, 00:23:35.220 "generate_uuids": false, 00:23:35.220 "transport_tos": 0, 00:23:35.220 "nvme_error_stat": false, 00:23:35.220 "rdma_srq_size": 0, 00:23:35.220 "io_path_stat": false, 00:23:35.220 "allow_accel_sequence": false, 00:23:35.220 "rdma_max_cq_size": 0, 00:23:35.220 "rdma_cm_event_timeout_ms": 0, 00:23:35.220 "dhchap_digests": [ 00:23:35.220 "sha256", 00:23:35.220 "sha384", 00:23:35.220 "sha512" 00:23:35.220 ], 00:23:35.220 "dhchap_dhgroups": [ 00:23:35.220 "null", 00:23:35.220 "ffdhe2048", 00:23:35.220 "ffdhe3072", 00:23:35.220 "ffdhe4096", 00:23:35.220 "ffdhe6144", 00:23:35.220 "ffdhe8192" 00:23:35.220 ] 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "bdev_nvme_set_hotplug", 00:23:35.220 "params": { 00:23:35.220 
"period_us": 100000, 00:23:35.220 "enable": false 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "bdev_malloc_create", 00:23:35.220 "params": { 00:23:35.220 "name": "malloc0", 00:23:35.220 "num_blocks": 8192, 00:23:35.220 "block_size": 4096, 00:23:35.220 "physical_block_size": 4096, 00:23:35.220 "uuid": "291e35e5-7311-4ec0-bfac-f3fc4026ed64", 00:23:35.220 "optimal_io_boundary": 0, 00:23:35.220 "md_size": 0, 00:23:35.220 "dif_type": 0, 00:23:35.220 "dif_is_head_of_md": false, 00:23:35.220 "dif_pi_format": 0 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "bdev_wait_for_examine" 00:23:35.220 } 00:23:35.220 ] 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "subsystem": "nbd", 00:23:35.220 "config": [] 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "subsystem": "scheduler", 00:23:35.220 "config": [ 00:23:35.220 { 00:23:35.220 "method": "framework_set_scheduler", 00:23:35.220 "params": { 00:23:35.220 "name": "static" 00:23:35.220 } 00:23:35.220 } 00:23:35.220 ] 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "subsystem": "nvmf", 00:23:35.220 "config": [ 00:23:35.220 { 00:23:35.220 "method": "nvmf_set_config", 00:23:35.220 "params": { 00:23:35.220 "discovery_filter": "match_any", 00:23:35.220 "admin_cmd_passthru": { 00:23:35.220 "identify_ctrlr": false 00:23:35.220 } 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "nvmf_set_max_subsystems", 00:23:35.220 "params": { 00:23:35.220 "max_subsystems": 1024 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "nvmf_set_crdt", 00:23:35.220 "params": { 00:23:35.220 "crdt1": 0, 00:23:35.220 "crdt2": 0, 00:23:35.220 "crdt3": 0 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "nvmf_create_transport", 00:23:35.220 "params": { 00:23:35.220 "trtype": "TCP", 00:23:35.220 "max_queue_depth": 128, 00:23:35.220 "max_io_qpairs_per_ctrlr": 127, 00:23:35.220 "in_capsule_data_size": 4096, 00:23:35.220 "max_io_size": 131072, 00:23:35.220 "io_unit_size": 
131072, 00:23:35.220 "max_aq_depth": 128, 00:23:35.220 "num_shared_buffers": 511, 00:23:35.220 "buf_cache_size": 4294967295, 00:23:35.220 "dif_insert_or_strip": false, 00:23:35.220 "zcopy": false, 00:23:35.220 "c2h_success": false, 00:23:35.220 "sock_priority": 0, 00:23:35.220 "abort_timeout_sec": 1, 00:23:35.220 "ack_timeout": 0, 00:23:35.220 "data_wr_pool_size": 0 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "nvmf_create_subsystem", 00:23:35.220 "params": { 00:23:35.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.220 "allow_any_host": false, 00:23:35.220 "serial_number": "00000000000000000000", 00:23:35.220 "model_number": "SPDK bdev Controller", 00:23:35.220 "max_namespaces": 32, 00:23:35.220 "min_cntlid": 1, 00:23:35.220 "max_cntlid": 65519, 00:23:35.220 "ana_reporting": false 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "nvmf_subsystem_add_host", 00:23:35.220 "params": { 00:23:35.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.220 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.220 "psk": "key0" 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "nvmf_subsystem_add_ns", 00:23:35.220 "params": { 00:23:35.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.220 "namespace": { 00:23:35.220 "nsid": 1, 00:23:35.220 "bdev_name": "malloc0", 00:23:35.220 "nguid": "291E35E573114EC0BFACF3FC4026ED64", 00:23:35.220 "uuid": "291e35e5-7311-4ec0-bfac-f3fc4026ed64", 00:23:35.220 "no_auto_visible": false 00:23:35.220 } 00:23:35.220 } 00:23:35.220 }, 00:23:35.220 { 00:23:35.220 "method": "nvmf_subsystem_add_listener", 00:23:35.220 "params": { 00:23:35.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.220 "listen_address": { 00:23:35.220 "trtype": "TCP", 00:23:35.220 "adrfam": "IPv4", 00:23:35.220 "traddr": "10.0.0.2", 00:23:35.220 "trsvcid": "4420" 00:23:35.220 }, 00:23:35.220 "secure_channel": false, 00:23:35.220 "sock_impl": "ssl" 00:23:35.220 } 00:23:35.220 } 00:23:35.220 ] 00:23:35.220 } 00:23:35.220 ] 
00:23:35.220 }' 00:23:35.220 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:35.479 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:35.479 "subsystems": [ 00:23:35.479 { 00:23:35.479 "subsystem": "keyring", 00:23:35.479 "config": [ 00:23:35.479 { 00:23:35.479 "method": "keyring_file_add_key", 00:23:35.479 "params": { 00:23:35.479 "name": "key0", 00:23:35.479 "path": "/tmp/tmp.Cnb6qXGBWU" 00:23:35.479 } 00:23:35.479 } 00:23:35.479 ] 00:23:35.479 }, 00:23:35.479 { 00:23:35.479 "subsystem": "iobuf", 00:23:35.479 "config": [ 00:23:35.479 { 00:23:35.479 "method": "iobuf_set_options", 00:23:35.479 "params": { 00:23:35.479 "small_pool_count": 8192, 00:23:35.479 "large_pool_count": 1024, 00:23:35.479 "small_bufsize": 8192, 00:23:35.479 "large_bufsize": 135168 00:23:35.479 } 00:23:35.479 } 00:23:35.479 ] 00:23:35.479 }, 00:23:35.479 { 00:23:35.479 "subsystem": "sock", 00:23:35.479 "config": [ 00:23:35.480 { 00:23:35.480 "method": "sock_set_default_impl", 00:23:35.480 "params": { 00:23:35.480 "impl_name": "posix" 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "sock_impl_set_options", 00:23:35.480 "params": { 00:23:35.480 "impl_name": "ssl", 00:23:35.480 "recv_buf_size": 4096, 00:23:35.480 "send_buf_size": 4096, 00:23:35.480 "enable_recv_pipe": true, 00:23:35.480 "enable_quickack": false, 00:23:35.480 "enable_placement_id": 0, 00:23:35.480 "enable_zerocopy_send_server": true, 00:23:35.480 "enable_zerocopy_send_client": false, 00:23:35.480 "zerocopy_threshold": 0, 00:23:35.480 "tls_version": 0, 00:23:35.480 "enable_ktls": false 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "sock_impl_set_options", 00:23:35.480 "params": { 00:23:35.480 "impl_name": "posix", 00:23:35.480 "recv_buf_size": 2097152, 00:23:35.480 "send_buf_size": 2097152, 00:23:35.480 
"enable_recv_pipe": true, 00:23:35.480 "enable_quickack": false, 00:23:35.480 "enable_placement_id": 0, 00:23:35.480 "enable_zerocopy_send_server": true, 00:23:35.480 "enable_zerocopy_send_client": false, 00:23:35.480 "zerocopy_threshold": 0, 00:23:35.480 "tls_version": 0, 00:23:35.480 "enable_ktls": false 00:23:35.480 } 00:23:35.480 } 00:23:35.480 ] 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "subsystem": "vmd", 00:23:35.480 "config": [] 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "subsystem": "accel", 00:23:35.480 "config": [ 00:23:35.480 { 00:23:35.480 "method": "accel_set_options", 00:23:35.480 "params": { 00:23:35.480 "small_cache_size": 128, 00:23:35.480 "large_cache_size": 16, 00:23:35.480 "task_count": 2048, 00:23:35.480 "sequence_count": 2048, 00:23:35.480 "buf_count": 2048 00:23:35.480 } 00:23:35.480 } 00:23:35.480 ] 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "subsystem": "bdev", 00:23:35.480 "config": [ 00:23:35.480 { 00:23:35.480 "method": "bdev_set_options", 00:23:35.480 "params": { 00:23:35.480 "bdev_io_pool_size": 65535, 00:23:35.480 "bdev_io_cache_size": 256, 00:23:35.480 "bdev_auto_examine": true, 00:23:35.480 "iobuf_small_cache_size": 128, 00:23:35.480 "iobuf_large_cache_size": 16 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "bdev_raid_set_options", 00:23:35.480 "params": { 00:23:35.480 "process_window_size_kb": 1024, 00:23:35.480 "process_max_bandwidth_mb_sec": 0 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "bdev_iscsi_set_options", 00:23:35.480 "params": { 00:23:35.480 "timeout_sec": 30 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "bdev_nvme_set_options", 00:23:35.480 "params": { 00:23:35.480 "action_on_timeout": "none", 00:23:35.480 "timeout_us": 0, 00:23:35.480 "timeout_admin_us": 0, 00:23:35.480 "keep_alive_timeout_ms": 10000, 00:23:35.480 "arbitration_burst": 0, 00:23:35.480 "low_priority_weight": 0, 00:23:35.480 "medium_priority_weight": 0, 00:23:35.480 
"high_priority_weight": 0, 00:23:35.480 "nvme_adminq_poll_period_us": 10000, 00:23:35.480 "nvme_ioq_poll_period_us": 0, 00:23:35.480 "io_queue_requests": 512, 00:23:35.480 "delay_cmd_submit": true, 00:23:35.480 "transport_retry_count": 4, 00:23:35.480 "bdev_retry_count": 3, 00:23:35.480 "transport_ack_timeout": 0, 00:23:35.480 "ctrlr_loss_timeout_sec": 0, 00:23:35.480 "reconnect_delay_sec": 0, 00:23:35.480 "fast_io_fail_timeout_sec": 0, 00:23:35.480 "disable_auto_failback": false, 00:23:35.480 "generate_uuids": false, 00:23:35.480 "transport_tos": 0, 00:23:35.480 "nvme_error_stat": false, 00:23:35.480 "rdma_srq_size": 0, 00:23:35.480 "io_path_stat": false, 00:23:35.480 "allow_accel_sequence": false, 00:23:35.480 "rdma_max_cq_size": 0, 00:23:35.480 "rdma_cm_event_timeout_ms": 0, 00:23:35.480 "dhchap_digests": [ 00:23:35.480 "sha256", 00:23:35.480 "sha384", 00:23:35.480 "sha512" 00:23:35.480 ], 00:23:35.480 "dhchap_dhgroups": [ 00:23:35.480 "null", 00:23:35.480 "ffdhe2048", 00:23:35.480 "ffdhe3072", 00:23:35.480 "ffdhe4096", 00:23:35.480 "ffdhe6144", 00:23:35.480 "ffdhe8192" 00:23:35.480 ] 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "bdev_nvme_attach_controller", 00:23:35.480 "params": { 00:23:35.480 "name": "nvme0", 00:23:35.480 "trtype": "TCP", 00:23:35.480 "adrfam": "IPv4", 00:23:35.480 "traddr": "10.0.0.2", 00:23:35.480 "trsvcid": "4420", 00:23:35.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.480 "prchk_reftag": false, 00:23:35.480 "prchk_guard": false, 00:23:35.480 "ctrlr_loss_timeout_sec": 0, 00:23:35.480 "reconnect_delay_sec": 0, 00:23:35.480 "fast_io_fail_timeout_sec": 0, 00:23:35.480 "psk": "key0", 00:23:35.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.480 "hdgst": false, 00:23:35.480 "ddgst": false 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "bdev_nvme_set_hotplug", 00:23:35.480 "params": { 00:23:35.480 "period_us": 100000, 00:23:35.480 "enable": false 00:23:35.480 } 00:23:35.480 }, 
00:23:35.480 { 00:23:35.480 "method": "bdev_enable_histogram", 00:23:35.480 "params": { 00:23:35.480 "name": "nvme0n1", 00:23:35.480 "enable": true 00:23:35.480 } 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "method": "bdev_wait_for_examine" 00:23:35.480 } 00:23:35.480 ] 00:23:35.480 }, 00:23:35.480 { 00:23:35.480 "subsystem": "nbd", 00:23:35.480 "config": [] 00:23:35.480 } 00:23:35.480 ] 00:23:35.480 }' 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 188076 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 188076 ']' 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 188076 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 188076 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 188076' 00:23:35.480 killing process with pid 188076 00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 188076 00:23:35.480 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.480 00:23:35.480 Latency(us) 00:23:35.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.480 =================================================================================================================== 00:23:35.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:23:35.480 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 188076 00:23:36.421 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 187871 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 187871 ']' 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 187871 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 187871 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 187871' 00:23:36.421 killing process with pid 187871 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 187871 00:23:36.421 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 187871 00:23:37.801 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:37.801 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:37.801 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:37.801 "subsystems": [ 00:23:37.801 { 00:23:37.801 "subsystem": 
"keyring", 00:23:37.801 "config": [ 00:23:37.801 { 00:23:37.801 "method": "keyring_file_add_key", 00:23:37.801 "params": { 00:23:37.801 "name": "key0", 00:23:37.801 "path": "/tmp/tmp.Cnb6qXGBWU" 00:23:37.801 } 00:23:37.801 } 00:23:37.801 ] 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "subsystem": "iobuf", 00:23:37.801 "config": [ 00:23:37.801 { 00:23:37.801 "method": "iobuf_set_options", 00:23:37.801 "params": { 00:23:37.801 "small_pool_count": 8192, 00:23:37.801 "large_pool_count": 1024, 00:23:37.801 "small_bufsize": 8192, 00:23:37.801 "large_bufsize": 135168 00:23:37.801 } 00:23:37.801 } 00:23:37.801 ] 00:23:37.801 }, 00:23:37.801 { 00:23:37.801 "subsystem": "sock", 00:23:37.801 "config": [ 00:23:37.801 { 00:23:37.801 "method": "sock_set_default_impl", 00:23:37.801 "params": { 00:23:37.802 "impl_name": "posix" 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "sock_impl_set_options", 00:23:37.802 "params": { 00:23:37.802 "impl_name": "ssl", 00:23:37.802 "recv_buf_size": 4096, 00:23:37.802 "send_buf_size": 4096, 00:23:37.802 "enable_recv_pipe": true, 00:23:37.802 "enable_quickack": false, 00:23:37.802 "enable_placement_id": 0, 00:23:37.802 "enable_zerocopy_send_server": true, 00:23:37.802 "enable_zerocopy_send_client": false, 00:23:37.802 "zerocopy_threshold": 0, 00:23:37.802 "tls_version": 0, 00:23:37.802 "enable_ktls": false 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "sock_impl_set_options", 00:23:37.802 "params": { 00:23:37.802 "impl_name": "posix", 00:23:37.802 "recv_buf_size": 2097152, 00:23:37.802 "send_buf_size": 2097152, 00:23:37.802 "enable_recv_pipe": true, 00:23:37.802 "enable_quickack": false, 00:23:37.802 "enable_placement_id": 0, 00:23:37.802 "enable_zerocopy_send_server": true, 00:23:37.802 "enable_zerocopy_send_client": false, 00:23:37.802 "zerocopy_threshold": 0, 00:23:37.802 "tls_version": 0, 00:23:37.802 "enable_ktls": false 00:23:37.802 } 00:23:37.802 } 00:23:37.802 ] 00:23:37.802 }, 00:23:37.802 { 
00:23:37.802 "subsystem": "vmd", 00:23:37.802 "config": [] 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "subsystem": "accel", 00:23:37.802 "config": [ 00:23:37.802 { 00:23:37.802 "method": "accel_set_options", 00:23:37.802 "params": { 00:23:37.802 "small_cache_size": 128, 00:23:37.802 "large_cache_size": 16, 00:23:37.802 "task_count": 2048, 00:23:37.802 "sequence_count": 2048, 00:23:37.802 "buf_count": 2048 00:23:37.802 } 00:23:37.802 } 00:23:37.802 ] 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "subsystem": "bdev", 00:23:37.802 "config": [ 00:23:37.802 { 00:23:37.802 "method": "bdev_set_options", 00:23:37.802 "params": { 00:23:37.802 "bdev_io_pool_size": 65535, 00:23:37.802 "bdev_io_cache_size": 256, 00:23:37.802 "bdev_auto_examine": true, 00:23:37.802 "iobuf_small_cache_size": 128, 00:23:37.802 "iobuf_large_cache_size": 16 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "bdev_raid_set_options", 00:23:37.802 "params": { 00:23:37.802 "process_window_size_kb": 1024, 00:23:37.802 "process_max_bandwidth_mb_sec": 0 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "bdev_iscsi_set_options", 00:23:37.802 "params": { 00:23:37.802 "timeout_sec": 30 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "bdev_nvme_set_options", 00:23:37.802 "params": { 00:23:37.802 "action_on_timeout": "none", 00:23:37.802 "timeout_us": 0, 00:23:37.802 "timeout_admin_us": 0, 00:23:37.802 "keep_alive_timeout_ms": 10000, 00:23:37.802 "arbitration_burst": 0, 00:23:37.802 "low_priority_weight": 0, 00:23:37.802 "medium_priority_weight": 0, 00:23:37.802 "high_priority_weight": 0, 00:23:37.802 "nvme_adminq_poll_period_us": 10000, 00:23:37.802 "nvme_ioq_poll_period_us": 0, 00:23:37.802 "io_queue_requests": 0, 00:23:37.802 "delay_cmd_submit": true, 00:23:37.802 "transport_retry_count": 4, 00:23:37.802 "bdev_retry_count": 3, 00:23:37.802 "transport_ack_timeout": 0, 00:23:37.802 "ctrlr_loss_timeout_sec": 0, 00:23:37.802 "reconnect_delay_sec": 0, 
00:23:37.802 "fast_io_fail_timeout_sec": 0, 00:23:37.802 "disable_auto_failback": false, 00:23:37.802 "generate_uuids": false, 00:23:37.802 "transport_tos": 0, 00:23:37.802 "nvme_error_stat": false, 00:23:37.802 "rdma_srq_size": 0, 00:23:37.802 "io_path_stat": false, 00:23:37.802 "allow_accel_sequence": false, 00:23:37.802 "rdma_max_cq_size": 0, 00:23:37.802 "rdma_cm_event_timeout_ms": 0, 00:23:37.802 "dhchap_digests": [ 00:23:37.802 "sha256", 00:23:37.802 "sha384", 00:23:37.802 "sha512" 00:23:37.802 ], 00:23:37.802 "dhchap_dhgroups": [ 00:23:37.802 "null", 00:23:37.802 "ffdhe2048", 00:23:37.802 "ffdhe3072", 00:23:37.802 "ffdhe4096", 00:23:37.802 "ffdhe6144", 00:23:37.802 "ffdhe8192" 00:23:37.802 ] 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "bdev_nvme_set_hotplug", 00:23:37.802 "params": { 00:23:37.802 "period_us": 100000, 00:23:37.802 "enable": false 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "bdev_malloc_create", 00:23:37.802 "params": { 00:23:37.802 "name": "malloc0", 00:23:37.802 "num_blocks": 8192, 00:23:37.802 "block_size": 4096, 00:23:37.802 "physical_block_size": 4096, 00:23:37.802 "uuid": "291e35e5-7311-4ec0-bfac-f3fc4026ed64", 00:23:37.802 "optimal_io_boundary": 0, 00:23:37.802 "md_size": 0, 00:23:37.802 "dif_type": 0, 00:23:37.802 "dif_is_head_of_md": false, 00:23:37.802 "dif_pi_format": 0 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "bdev_wait_for_examine" 00:23:37.802 } 00:23:37.802 ] 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "subsystem": "nbd", 00:23:37.802 "config": [] 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "subsystem": "scheduler", 00:23:37.802 "config": [ 00:23:37.802 { 00:23:37.802 "method": "framework_set_scheduler", 00:23:37.802 "params": { 00:23:37.802 "name": "static" 00:23:37.802 } 00:23:37.802 } 00:23:37.802 ] 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "subsystem": "nvmf", 00:23:37.802 "config": [ 00:23:37.802 { 00:23:37.802 "method": "nvmf_set_config", 
00:23:37.802 "params": { 00:23:37.802 "discovery_filter": "match_any", 00:23:37.802 "admin_cmd_passthru": { 00:23:37.802 "identify_ctrlr": false 00:23:37.802 } 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "nvmf_set_max_subsystems", 00:23:37.802 "params": { 00:23:37.802 "max_subsystems": 1024 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "nvmf_set_crdt", 00:23:37.802 "params": { 00:23:37.802 "crdt1": 0, 00:23:37.802 "crdt2": 0, 00:23:37.802 "crdt3": 0 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "nvmf_create_transport", 00:23:37.802 "params": { 00:23:37.802 "trtype": "TCP", 00:23:37.802 "max_queue_depth": 128, 00:23:37.802 "max_io_qpairs_per_ctrlr": 127, 00:23:37.802 "in_capsule_data_size": 4096, 00:23:37.802 "max_io_size": 131072, 00:23:37.802 "io_unit_size": 131072, 00:23:37.802 "max_aq_depth": 128, 00:23:37.802 "num_shared_buffers": 511, 00:23:37.802 "buf_cache_size": 4294967295, 00:23:37.802 "dif_insert_or_strip": false, 00:23:37.802 "zcopy": false, 00:23:37.802 "c2h_success": false, 00:23:37.802 "sock_priority": 0, 00:23:37.802 "abort_timeout_sec": 1, 00:23:37.802 "ack_timeout": 0, 00:23:37.802 "data_wr_pool_size": 0 00:23:37.802 } 00:23:37.802 }, 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.802 { 00:23:37.802 "method": "nvmf_create_subsystem", 00:23:37.802 "params": { 00:23:37.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.802 "allow_any_host": false, 00:23:37.802 "serial_number": "00000000000000000000", 00:23:37.802 "model_number": "SPDK bdev Controller", 00:23:37.802 "max_namespaces": 32, 00:23:37.802 "min_cntlid": 1, 00:23:37.802 "max_cntlid": 65519, 00:23:37.802 "ana_reporting": false 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "nvmf_subsystem_add_host", 00:23:37.802 "params": { 00:23:37.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.802 "host": "nqn.2016-06.io.spdk:host1", 
00:23:37.802 "psk": "key0" 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "nvmf_subsystem_add_ns", 00:23:37.802 "params": { 00:23:37.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.802 "namespace": { 00:23:37.802 "nsid": 1, 00:23:37.802 "bdev_name": "malloc0", 00:23:37.802 "nguid": "291E35E573114EC0BFACF3FC4026ED64", 00:23:37.802 "uuid": "291e35e5-7311-4ec0-bfac-f3fc4026ed64", 00:23:37.802 "no_auto_visible": false 00:23:37.802 } 00:23:37.802 } 00:23:37.802 }, 00:23:37.802 { 00:23:37.802 "method": "nvmf_subsystem_add_listener", 00:23:37.802 "params": { 00:23:37.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.802 "listen_address": { 00:23:37.802 "trtype": "TCP", 00:23:37.802 "adrfam": "IPv4", 00:23:37.802 "traddr": "10.0.0.2", 00:23:37.802 "trsvcid": "4420" 00:23:37.802 }, 00:23:37.802 "secure_channel": false, 00:23:37.802 "sock_impl": "ssl" 00:23:37.803 } 00:23:37.803 } 00:23:37.803 ] 00:23:37.803 } 00:23:37.803 ] 00:23:37.803 }' 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=188715 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 188715 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 188715 ']' 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.803 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.803 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.803 [2024-07-26 06:15:49.134186] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:37.803 [2024-07-26 06:15:49.134332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.063 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.063 [2024-07-26 06:15:49.266500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.323 [2024-07-26 06:15:49.499845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.323 [2024-07-26 06:15:49.499922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.323 [2024-07-26 06:15:49.499945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.323 [2024-07-26 06:15:49.499966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.323 [2024-07-26 06:15:49.499984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.323 [2024-07-26 06:15:49.500138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.890 [2024-07-26 06:15:50.033423] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.890 [2024-07-26 06:15:50.065442] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.890 [2024-07-26 06:15:50.065766] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=188817 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 188817 /var/tmp/bdevperf.sock 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 188817 ']' 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:23:38.890 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:38.890 "subsystems": [ 00:23:38.890 { 00:23:38.890 "subsystem": "keyring", 00:23:38.890 "config": [ 00:23:38.890 { 00:23:38.890 "method": "keyring_file_add_key", 00:23:38.890 "params": { 00:23:38.890 "name": "key0", 00:23:38.890 "path": "/tmp/tmp.Cnb6qXGBWU" 00:23:38.890 } 00:23:38.891 } 00:23:38.891 ] 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "subsystem": "iobuf", 00:23:38.891 "config": [ 00:23:38.891 { 00:23:38.891 "method": "iobuf_set_options", 00:23:38.891 "params": { 00:23:38.891 "small_pool_count": 8192, 00:23:38.891 "large_pool_count": 1024, 00:23:38.891 "small_bufsize": 8192, 00:23:38.891 "large_bufsize": 135168 00:23:38.891 } 00:23:38.891 } 00:23:38.891 ] 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "subsystem": "sock", 00:23:38.891 "config": [ 00:23:38.891 { 00:23:38.891 "method": "sock_set_default_impl", 00:23:38.891 "params": { 00:23:38.891 "impl_name": "posix" 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "method": "sock_impl_set_options", 00:23:38.891 "params": { 00:23:38.891 "impl_name": "ssl", 00:23:38.891 "recv_buf_size": 4096, 00:23:38.891 "send_buf_size": 4096, 00:23:38.891 "enable_recv_pipe": true, 00:23:38.891 "enable_quickack": false, 00:23:38.891 "enable_placement_id": 0, 00:23:38.891 "enable_zerocopy_send_server": true, 00:23:38.891 "enable_zerocopy_send_client": false, 00:23:38.891 "zerocopy_threshold": 0, 00:23:38.891 "tls_version": 0, 00:23:38.891 "enable_ktls": false 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "method": "sock_impl_set_options", 00:23:38.891 "params": { 00:23:38.891 "impl_name": "posix", 00:23:38.891 "recv_buf_size": 2097152, 00:23:38.891 "send_buf_size": 2097152, 00:23:38.891 "enable_recv_pipe": true, 00:23:38.891 "enable_quickack": false, 00:23:38.891 "enable_placement_id": 0, 00:23:38.891 "enable_zerocopy_send_server": true, 00:23:38.891 "enable_zerocopy_send_client": false, 
00:23:38.891 "zerocopy_threshold": 0, 00:23:38.891 "tls_version": 0, 00:23:38.891 "enable_ktls": false 00:23:38.891 } 00:23:38.891 } 00:23:38.891 ] 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "subsystem": "vmd", 00:23:38.891 "config": [] 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "subsystem": "accel", 00:23:38.891 "config": [ 00:23:38.891 { 00:23:38.891 "method": "accel_set_options", 00:23:38.891 "params": { 00:23:38.891 "small_cache_size": 128, 00:23:38.891 "large_cache_size": 16, 00:23:38.891 "task_count": 2048, 00:23:38.891 "sequence_count": 2048, 00:23:38.891 "buf_count": 2048 00:23:38.891 } 00:23:38.891 } 00:23:38.891 ] 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "subsystem": "bdev", 00:23:38.891 "config": [ 00:23:38.891 { 00:23:38.891 "method": "bdev_set_options", 00:23:38.891 "params": { 00:23:38.891 "bdev_io_pool_size": 65535, 00:23:38.891 "bdev_io_cache_size": 256, 00:23:38.891 "bdev_auto_examine": true, 00:23:38.891 "iobuf_small_cache_size": 128, 00:23:38.891 "iobuf_large_cache_size": 16 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "method": "bdev_raid_set_options", 00:23:38.891 "params": { 00:23:38.891 "process_window_size_kb": 1024, 00:23:38.891 "process_max_bandwidth_mb_sec": 0 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "method": "bdev_iscsi_set_options", 00:23:38.891 "params": { 00:23:38.891 "timeout_sec": 30 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "method": "bdev_nvme_set_options", 00:23:38.891 "params": { 00:23:38.891 "action_on_timeout": "none", 00:23:38.891 "timeout_us": 0, 00:23:38.891 "timeout_admin_us": 0, 00:23:38.891 "keep_alive_timeout_ms": 10000, 00:23:38.891 "arbitration_burst": 0, 00:23:38.891 "low_priority_weight": 0, 00:23:38.891 "medium_priority_weight": 0, 00:23:38.891 "high_priority_weight": 0, 00:23:38.891 "nvme_adminq_poll_period_us": 10000, 00:23:38.891 "nvme_ioq_poll_period_us": 0, 00:23:38.891 "io_queue_requests": 512, 00:23:38.891 "delay_cmd_submit": true, 00:23:38.891 
"transport_retry_count": 4, 00:23:38.891 "bdev_retry_count": 3, 00:23:38.891 "transport_ack_timeout": 0, 00:23:38.891 "ctrlr_loss_timeout_sec": 0, 00:23:38.891 "reconnect_delay_sec": 0, 00:23:38.891 "fast_io_fail_timeout_sec": 0, 00:23:38.891 "disable_auto_failback": false, 00:23:38.891 "generate_uuids": false, 00:23:38.891 "transport_tos": 0, 00:23:38.891 "nvme_error_stat": false, 00:23:38.891 "rdma_srq_size": 0, 00:23:38.891 "io_path_stat": false, 00:23:38.891 "allow_accel_sequence": false, 00:23:38.891 "rdma_max_cq_size": 0, 00:23:38.891 "rdma_cm_event_timeout_ms": 0, 00:23:38.891 "dhchap_digests": [ 00:23:38.891 "sha256", 00:23:38.891 "sha384", 00:23:38.891 "sha512" 00:23:38.891 ], 00:23:38.891 "dhchap_dhgroups": [ 00:23:38.891 "null", 00:23:38.891 "ffdhe2048", 00:23:38.891 "ffdhe3072", 00:23:38.891 "ffdhe4096", 00:23:38.891 "ffdhe6144", 00:23:38.891 "ffdhe8192" 00:23:38.891 ] 00:23:38.891 } 00:23:38.891 }, 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.891 { 00:23:38.891 "method": "bdev_nvme_attach_controller", 00:23:38.891 "params": { 00:23:38.891 "name": "nvme0", 00:23:38.891 "trtype": "TCP", 00:23:38.891 "adrfam": "IPv4", 00:23:38.891 "traddr": "10.0.0.2", 00:23:38.891 "trsvcid": "4420", 00:23:38.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.891 "prchk_reftag": false, 00:23:38.891 "prchk_guard": false, 00:23:38.891 "ctrlr_loss_timeout_sec": 0, 00:23:38.891 "reconnect_delay_sec": 0, 00:23:38.891 "fast_io_fail_timeout_sec": 0, 00:23:38.891 "psk": "key0", 00:23:38.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.891 "hdgst": false, 00:23:38.891 "ddgst": false 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "method": "bdev_nvme_set_hotplug", 00:23:38.891 "params": { 00:23:38.891 "period_us": 100000, 00:23:38.891 "enable": false 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 
"method": "bdev_enable_histogram", 00:23:38.891 "params": { 00:23:38.891 "name": "nvme0n1", 00:23:38.891 "enable": true 00:23:38.891 } 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "method": "bdev_wait_for_examine" 00:23:38.891 } 00:23:38.891 ] 00:23:38.891 }, 00:23:38.891 { 00:23:38.891 "subsystem": "nbd", 00:23:38.891 "config": [] 00:23:38.891 } 00:23:38.891 ] 00:23:38.891 }' 00:23:38.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.891 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.891 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.891 [2024-07-26 06:15:50.201763] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:38.891 [2024-07-26 06:15:50.201914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188817 ] 00:23:39.151 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.151 [2024-07-26 06:15:50.336244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.411 [2024-07-26 06:15:50.594999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.670 [2024-07-26 06:15:50.993444] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.929 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.929 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:39.929 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:39.929 06:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:40.187 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.187 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.446 Running I/O for 1 seconds... 00:23:41.383 00:23:41.383 Latency(us) 00:23:41.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.383 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:41.383 Verification LBA range: start 0x0 length 0x2000 00:23:41.383 nvme0n1 : 1.04 2485.43 9.71 0.00 0.00 50735.24 14563.56 51652.08 00:23:41.383 =================================================================================================================== 00:23:41.383 Total : 2485.43 9.71 0.00 0.00 50735.24 14563.56 51652.08 00:23:41.383 0 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:41.383 nvmf_trace.0 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 188817 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 188817 ']' 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 188817 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 188817 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:41.383 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:41.384 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 188817' 00:23:41.384 killing process with pid 188817 00:23:41.384 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 188817 00:23:41.384 Received shutdown signal, test time was about 1.000000 seconds 00:23:41.384 00:23:41.384 Latency(us) 00:23:41.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.384 
=================================================================================================================== 00:23:41.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.384 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 188817 00:23:42.758 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:42.758 rmmod nvme_tcp 00:23:42.758 rmmod nvme_fabrics 00:23:42.758 rmmod nvme_keyring 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 188715 ']' 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 188715 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 188715 ']' 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 188715 00:23:42.758 06:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 188715 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 188715' 00:23:42.758 killing process with pid 188715 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 188715 00:23:42.758 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 188715 00:23:44.140 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.140 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RvJxr0t4tR /tmp/tmp.PsXrQxYBXI /tmp/tmp.Cnb6qXGBWU 00:23:46.071 00:23:46.071 real 1m49.675s 00:23:46.071 user 2m45.506s 00:23:46.071 sys 0m28.434s 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.071 ************************************ 00:23:46.071 END TEST nvmf_tls 00:23:46.071 ************************************ 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.071 ************************************ 00:23:46.071 START TEST nvmf_fips 00:23:46.071 ************************************ 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:46.071 * Looking for test storage... 
00:23:46.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.071 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.072 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:46.332 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:46.333 Error setting digest 00:23:46.333 00A24C2EBF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:46.333 00A24C2EBF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.333 06:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.333 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.239 06:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:48.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:48.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.239 06:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.239 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:48.240 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.240 
06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:48.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.240 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:48.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:23:48.499 00:23:48.499 --- 10.0.0.2 ping statistics --- 00:23:48.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.499 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:48.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:23:48.499 00:23:48.499 --- 10.0.0.1 ping statistics --- 00:23:48.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.499 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=191432 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 191432 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 191432 ']' 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.499 06:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:48.499 [2024-07-26 06:15:59.752368] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:48.499 [2024-07-26 06:15:59.752504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.499 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.757 [2024-07-26 06:15:59.893073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.015 [2024-07-26 06:16:00.156510] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.015 [2024-07-26 06:16:00.156598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.015 [2024-07-26 06:16:00.156627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.015 [2024-07-26 06:16:00.156648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.015 [2024-07-26 06:16:00.156670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:49.015 [2024-07-26 06:16:00.156722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.582 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:49.582 [2024-07-26 06:16:00.872455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.582 [2024-07-26 06:16:00.888392] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.582 [2024-07-26 06:16:00.888733] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.840 [2024-07-26 06:16:00.964660] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:49.840 malloc0 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=191586 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 191586 /var/tmp/bdevperf.sock 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 191586 ']' 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.840 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.840 [2024-07-26 06:16:01.101221] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:49.840 [2024-07-26 06:16:01.101386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191586 ] 00:23:49.840 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.099 [2024-07-26 06:16:01.224141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.358 [2024-07-26 06:16:01.453004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.924 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.924 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:50.924 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:50.924 [2024-07-26 06:16:02.200821] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.924 [2024-07-26 06:16:02.201002] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:51.181 TLSTESTn1 00:23:51.181 06:16:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.181 Running I/O for 10 seconds... 00:24:01.170 00:24:01.170 Latency(us) 00:24:01.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.170 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.170 Verification LBA range: start 0x0 length 0x2000 00:24:01.170 TLSTESTn1 : 10.05 1433.78 5.60 0.00 0.00 89118.49 18058.81 78060.66 00:24:01.170 =================================================================================================================== 00:24:01.170 Total : 1433.78 5.60 0.00 0.00 89118.49 18058.81 78060.66 00:24:01.170 0 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:01.170 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:01.170 nvmf_trace.0 
00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 191586 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 191586 ']' 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 191586 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 191586 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 191586' 00:24:01.431 killing process with pid 191586 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 191586 00:24:01.431 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.431 00:24:01.431 Latency(us) 00:24:01.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.431 =================================================================================================================== 00:24:01.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.431 [2024-07-26 06:16:12.596490] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:01.431 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 191586 
00:24:02.365 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.365 rmmod nvme_tcp 00:24:02.365 rmmod nvme_fabrics 00:24:02.365 rmmod nvme_keyring 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 191432 ']' 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 191432 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 191432 ']' 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 191432 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 191432 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 191432' 00:24:02.365 killing process with pid 191432 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 191432 00:24:02.365 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 191432 00:24:02.365 [2024-07-26 06:16:13.616710] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:03.746 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.746 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.281 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:24:06.281 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:06.281 00:24:06.281 real 0m19.770s 00:24:06.281 user 0m20.323s 00:24:06.281 sys 0m6.345s 00:24:06.281 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:06.281 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:06.281 ************************************ 00:24:06.281 END TEST nvmf_fips 00:24:06.281 ************************************ 00:24:06.281 06:16:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:24:06.281 06:16:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:06.281 06:16:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:06.282 ************************************ 00:24:06.282 START TEST nvmf_fuzz 00:24:06.282 ************************************ 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:06.282 * Looking for test storage... 
00:24:06.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.282 
06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.282 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.219 06:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.219 
06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:08.219 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:08.219 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.219 06:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:08.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.219 
06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:08.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.219 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:08.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:24:08.220 00:24:08.220 --- 10.0.0.2 ping statistics --- 00:24:08.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.220 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:08.220 00:24:08.220 --- 10.0.0.1 ping statistics --- 00:24:08.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.220 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=195100 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 195100 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 195100 ']' 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.220 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.157 Malloc0 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:09.157 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:41.248 Fuzzing completed. 
Shutting down the fuzz application 00:24:41.248 00:24:41.248 Dumping successful admin opcodes: 00:24:41.248 8, 9, 10, 24, 00:24:41.248 Dumping successful io opcodes: 00:24:41.248 0, 9, 00:24:41.248 NS: 0x200003aefec0 I/O qp, Total commands completed: 332821, total successful commands: 1974, random_seed: 2718623104 00:24:41.248 NS: 0x200003aefec0 admin qp, Total commands completed: 41920, total successful commands: 341, random_seed: 1818651712 00:24:41.248 06:16:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:42.196 Fuzzing completed. Shutting down the fuzz application 00:24:42.196 00:24:42.196 Dumping successful admin opcodes: 00:24:42.196 24, 00:24:42.196 Dumping successful io opcodes: 00:24:42.196 00:24:42.196 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 816807305 00:24:42.196 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 817030625 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:42.196 06:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.196 rmmod nvme_tcp 00:24:42.196 rmmod nvme_fabrics 00:24:42.196 rmmod nvme_keyring 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 195100 ']' 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 195100 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 195100 ']' 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 195100 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.196 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 195100 00:24:42.458 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:42.458 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:24:42.458 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 195100' 00:24:42.458 killing process with pid 195100 00:24:42.458 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 195100 00:24:42.458 06:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 195100 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.835 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.739 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.739 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:45.739 00:24:45.739 real 0m39.912s 00:24:45.739 user 0m58.139s 00:24:45.739 sys 0m12.866s 00:24:45.739 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:45.739 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:24:45.739 ************************************ 00:24:45.739 END TEST nvmf_fuzz 00:24:45.739 ************************************ 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:45.998 ************************************ 00:24:45.998 START TEST nvmf_multiconnection 00:24:45.998 ************************************ 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:45.998 * Looking for test storage... 
00:24:45.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.998 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 
00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:47.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:47.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:47.905 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:47.906 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:24:47.906 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.906 06:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:47.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:47.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:24:47.906 00:24:47.906 --- 10.0.0.2 ping statistics --- 00:24:47.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.906 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:24:47.906 00:24:47.906 --- 10.0.0.1 ping statistics --- 00:24:47.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.906 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:47.906 06:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=201089 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 201089 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 201089 ']' 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.906 06:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.164 [2024-07-26 06:16:59.314180] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:48.164 [2024-07-26 06:16:59.314350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.164 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.164 [2024-07-26 06:16:59.457477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:48.422 [2024-07-26 06:16:59.716784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.422 [2024-07-26 06:16:59.716860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.422 [2024-07-26 06:16:59.716889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.422 [2024-07-26 06:16:59.716911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.422 [2024-07-26 06:16:59.716932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:48.422 [2024-07-26 06:16:59.717052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.422 [2024-07-26 06:16:59.717126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.422 [2024-07-26 06:16:59.717159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.422 [2024-07-26 06:16:59.717149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.990 [2024-07-26 06:17:00.251763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:48.990 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.990 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.249 Malloc1 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.249 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.250 [2024-07-26 06:17:00.362019] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.250 Malloc2 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.250 Malloc3 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.250 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 Malloc4
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 Malloc5
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 Malloc6
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.526 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 Malloc7
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 Malloc8
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.798 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:49.799 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.799 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.799 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:24:49.799 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.799 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.056 Malloc9
00:24:50.056 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.056 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:24:50.056 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.056 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.056 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.056 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 Malloc10
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 Malloc11
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:50.057 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:24:50.994 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:24:50.994 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:50.994 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:50.994 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:50.994 06:17:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:52.898 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:24:53.463 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:24:53.463 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:53.463 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:53.463 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:53.463 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:55.991 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:55.992 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:55.992 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2
00:24:55.992 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:55.992 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:55.992 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:55.992 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:55.992 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:24:56.251 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:24:56.251 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:56.251 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:56.251 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:56.251 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:58.154 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:58.154 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:58.154 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3
00:24:58.413 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:58.413 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:58.413 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:58.413 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:58.413 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:24:58.981 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:24:58.981 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:58.981 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:58.981 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:58.981 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:01.513 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:25:01.771 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:25:01.771 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:01.771 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:01.771 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:01.771 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:03.675 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:03.675 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:03.675 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5
00:25:03.675 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:03.675 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:03.675 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:03.675 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:03.676 06:17:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:25:04.612 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:25:04.612 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:04.612 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:04.613 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:04.613 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:06.518 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:06.518 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:06.518 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6
00:25:06.801 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:06.801 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:06.801 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:06.801 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:06.801 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:25:07.371 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:25:07.371 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:07.371 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:07.371 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:07.371 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:09.906 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:25:10.475 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:25:10.475 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:10.475 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:10.475 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:10.475 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:12.380 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:25:13.315 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:25:13.315 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:13.315 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:13.315 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:13.315 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:15.221 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:25:16.158 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:25:16.158 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:16.158 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:16.158 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:16.158 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:18.064 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:25:19.000 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:25:19.000 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:19.000 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:19.000 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:19.000 06:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:20.901 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:20.901 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:20.901 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11
00:25:20.901 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:20.901 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:20.901 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:20.901 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:20.901 [global]
00:25:20.901 thread=1
00:25:20.901 invalidate=1
00:25:20.901 rw=read
00:25:20.901 time_based=1
00:25:20.901 runtime=10
00:25:20.901 ioengine=libaio
00:25:20.901 direct=1
00:25:20.901 bs=262144
00:25:20.901 iodepth=64
00:25:20.901 norandommap=1
00:25:20.901 numjobs=1
00:25:20.901 
00:25:20.901 [job0]
00:25:20.901 filename=/dev/nvme0n1
00:25:20.901 [job1]
00:25:20.901 filename=/dev/nvme10n1
00:25:20.901 [job2]
00:25:20.901 filename=/dev/nvme1n1
00:25:20.901 [job3]
00:25:20.901 filename=/dev/nvme2n1
00:25:20.901 [job4]
00:25:20.901 filename=/dev/nvme3n1
00:25:20.901 [job5]
00:25:20.901 filename=/dev/nvme4n1
00:25:20.901 [job6]
00:25:20.901 filename=/dev/nvme5n1
00:25:20.901 [job7]
00:25:20.901 filename=/dev/nvme6n1
00:25:20.901 [job8]
00:25:20.901 filename=/dev/nvme7n1
00:25:20.901 [job9]
00:25:20.901 filename=/dev/nvme8n1
00:25:20.901 [job10]
00:25:20.901 filename=/dev/nvme9n1
00:25:20.901 Could not set queue depth (nvme0n1)
00:25:20.901 Could not set queue depth (nvme10n1)
00:25:20.901 Could not set queue depth (nvme1n1)
00:25:20.901 Could not set queue depth (nvme2n1)
00:25:20.901 Could not set queue depth (nvme3n1)
00:25:20.901 Could not set queue depth (nvme4n1)
00:25:20.901 Could not set queue depth (nvme5n1)
00:25:20.901 Could not set queue depth (nvme6n1)
00:25:20.901 Could not set queue depth (nvme7n1)
00:25:20.901 Could not set queue depth (nvme8n1)
00:25:20.901 Could not set queue depth (nvme9n1)
00:25:21.159 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:21.159 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB,
ioengine=libaio, iodepth=64 00:25:21.159 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:21.159 fio-3.35 00:25:21.159 Starting 11 threads 00:25:33.387 00:25:33.387 job0: (groupid=0, jobs=1): err= 0: pid=205479: Fri Jul 26 06:17:42 2024 00:25:33.387 read: IOPS=659, BW=165MiB/s (173MB/s)(1661MiB/10078msec) 00:25:33.387 slat (usec): min=11, max=38002, avg=1487.34, stdev=4406.67 00:25:33.387 clat (msec): min=14, max=185, avg=95.53, stdev=28.17 00:25:33.387 lat (msec): min=14, max=185, avg=97.02, stdev=28.63 00:25:33.387 clat percentiles (msec): 00:25:33.387 | 1.00th=[ 37], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 70], 00:25:33.387 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 94], 60.00th=[ 102], 00:25:33.387 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 146], 00:25:33.387 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 180], 00:25:33.387 | 99.99th=[ 186] 00:25:33.387 bw ( KiB/s): min=108032, max=284160, per=11.68%, 
avg=168422.40, stdev=46192.29, samples=20 00:25:33.387 iops : min= 422, max= 1110, avg=657.90, stdev=180.44, samples=20 00:25:33.387 lat (msec) : 20=0.23%, 50=3.22%, 100=54.99%, 250=41.56% 00:25:33.387 cpu : usr=0.36%, sys=2.07%, ctx=1098, majf=0, minf=4097 00:25:33.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:33.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.387 issued rwts: total=6643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.387 job1: (groupid=0, jobs=1): err= 0: pid=205480: Fri Jul 26 06:17:42 2024 00:25:33.387 read: IOPS=503, BW=126MiB/s (132MB/s)(1274MiB/10125msec) 00:25:33.387 slat (usec): min=9, max=109900, avg=1201.89, stdev=5892.98 00:25:33.387 clat (usec): min=1489, max=402373, avg=125829.81, stdev=78464.17 00:25:33.387 lat (usec): min=1509, max=405251, avg=127031.70, stdev=79514.83 00:25:33.387 clat percentiles (msec): 00:25:33.387 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 41], 00:25:33.387 | 30.00th=[ 66], 40.00th=[ 110], 50.00th=[ 131], 60.00th=[ 150], 00:25:33.387 | 70.00th=[ 169], 80.00th=[ 192], 90.00th=[ 230], 95.00th=[ 259], 00:25:33.387 | 99.00th=[ 321], 99.50th=[ 334], 99.90th=[ 351], 99.95th=[ 355], 00:25:33.387 | 99.99th=[ 401] 00:25:33.387 bw ( KiB/s): min=61440, max=310272, per=8.94%, avg=128883.85, stdev=62883.52, samples=20 00:25:33.387 iops : min= 240, max= 1212, avg=503.40, stdev=245.57, samples=20 00:25:33.387 lat (msec) : 2=0.06%, 4=0.84%, 10=3.34%, 20=4.77%, 50=17.54% 00:25:33.387 lat (msec) : 100=10.73%, 250=56.27%, 500=6.45% 00:25:33.387 cpu : usr=0.26%, sys=1.21%, ctx=1046, majf=0, minf=4097 00:25:33.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.387 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.387 issued rwts: total=5097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.387 job2: (groupid=0, jobs=1): err= 0: pid=205483: Fri Jul 26 06:17:42 2024 00:25:33.387 read: IOPS=523, BW=131MiB/s (137MB/s)(1324MiB/10121msec) 00:25:33.388 slat (usec): min=9, max=166178, avg=783.51, stdev=6073.02 00:25:33.388 clat (usec): min=1185, max=365234, avg=121410.31, stdev=72386.89 00:25:33.388 lat (usec): min=1222, max=453469, avg=122193.82, stdev=73145.95 00:25:33.388 clat percentiles (msec): 00:25:33.388 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 54], 00:25:33.388 | 30.00th=[ 79], 40.00th=[ 112], 50.00th=[ 131], 60.00th=[ 144], 00:25:33.388 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 218], 95.00th=[ 255], 00:25:33.388 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 326], 99.95th=[ 342], 00:25:33.388 | 99.99th=[ 368] 00:25:33.388 bw ( KiB/s): min=57344, max=267776, per=9.29%, avg=133973.00, stdev=48618.32, samples=20 00:25:33.388 iops : min= 224, max= 1046, avg=523.30, stdev=189.89, samples=20 00:25:33.388 lat (msec) : 2=0.13%, 4=1.79%, 10=6.02%, 20=5.25%, 50=5.93% 00:25:33.388 lat (msec) : 100=16.84%, 250=58.39%, 500=5.64% 00:25:33.388 cpu : usr=0.12%, sys=1.40%, ctx=1142, majf=0, minf=4097 00:25:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.388 issued rwts: total=5297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.388 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.388 job3: (groupid=0, jobs=1): err= 0: pid=205484: Fri Jul 26 06:17:42 2024 00:25:33.388 read: IOPS=516, BW=129MiB/s (135MB/s)(1305MiB/10118msec) 00:25:33.388 slat (usec): min=10, max=214347, avg=1644.65, stdev=6490.79 00:25:33.388 clat 
(usec): min=1677, max=336914, avg=122283.81, stdev=61009.30 00:25:33.388 lat (usec): min=1693, max=411366, avg=123928.46, stdev=61880.76 00:25:33.388 clat percentiles (msec): 00:25:33.388 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 34], 20.00th=[ 89], 00:25:33.388 | 30.00th=[ 102], 40.00th=[ 111], 50.00th=[ 123], 60.00th=[ 132], 00:25:33.388 | 70.00th=[ 144], 80.00th=[ 157], 90.00th=[ 186], 95.00th=[ 253], 00:25:33.388 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 338], 99.95th=[ 338], 00:25:33.388 | 99.99th=[ 338] 00:25:33.388 bw ( KiB/s): min=64512, max=218112, per=9.15%, avg=132044.95, stdev=39458.34, samples=20 00:25:33.388 iops : min= 252, max= 852, avg=515.75, stdev=154.14, samples=20 00:25:33.388 lat (msec) : 2=0.15%, 4=0.94%, 10=1.19%, 20=3.14%, 50=10.52% 00:25:33.388 lat (msec) : 100=12.49%, 250=66.12%, 500=5.46% 00:25:33.388 cpu : usr=0.18%, sys=1.60%, ctx=949, majf=0, minf=4097 00:25:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.388 issued rwts: total=5221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.388 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.388 job4: (groupid=0, jobs=1): err= 0: pid=205485: Fri Jul 26 06:17:42 2024 00:25:33.388 read: IOPS=592, BW=148MiB/s (155MB/s)(1494MiB/10086msec) 00:25:33.388 slat (usec): min=11, max=68237, avg=1596.86, stdev=4925.47 00:25:33.388 clat (msec): min=32, max=233, avg=106.34, stdev=44.10 00:25:33.388 lat (msec): min=32, max=233, avg=107.93, stdev=44.76 00:25:33.388 clat percentiles (msec): 00:25:33.388 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 57], 20.00th=[ 69], 00:25:33.388 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 117], 00:25:33.388 | 70.00th=[ 136], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 184], 00:25:33.388 | 99.00th=[ 205], 99.50th=[ 209], 99.90th=[ 220], 99.95th=[ 
220], 00:25:33.388 | 99.99th=[ 234] 00:25:33.388 bw ( KiB/s): min=82944, max=305664, per=10.49%, avg=151330.05, stdev=62102.50, samples=20 00:25:33.388 iops : min= 324, max= 1194, avg=591.10, stdev=242.62, samples=20 00:25:33.388 lat (msec) : 50=9.35%, 100=42.37%, 250=48.28% 00:25:33.388 cpu : usr=0.42%, sys=1.95%, ctx=1022, majf=0, minf=4097 00:25:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.388 issued rwts: total=5976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.388 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.388 job5: (groupid=0, jobs=1): err= 0: pid=205492: Fri Jul 26 06:17:42 2024 00:25:33.388 read: IOPS=355, BW=88.9MiB/s (93.2MB/s)(900MiB/10125msec) 00:25:33.388 slat (usec): min=13, max=78663, avg=2645.36, stdev=7723.81 00:25:33.388 clat (msec): min=56, max=370, avg=177.16, stdev=47.24 00:25:33.388 lat (msec): min=56, max=385, avg=179.80, stdev=48.24 00:25:33.388 clat percentiles (msec): 00:25:33.388 | 1.00th=[ 77], 5.00th=[ 121], 10.00th=[ 130], 20.00th=[ 144], 00:25:33.388 | 30.00th=[ 150], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 176], 00:25:33.388 | 70.00th=[ 192], 80.00th=[ 215], 90.00th=[ 245], 95.00th=[ 275], 00:25:33.388 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 342], 99.95th=[ 372], 00:25:33.388 | 99.99th=[ 372] 00:25:33.388 bw ( KiB/s): min=47616, max=121344, per=6.28%, avg=90547.40, stdev=20148.86, samples=20 00:25:33.388 iops : min= 186, max= 474, avg=353.65, stdev=78.74, samples=20 00:25:33.388 lat (msec) : 100=1.94%, 250=89.86%, 500=8.19% 00:25:33.388 cpu : usr=0.22%, sys=1.20%, ctx=706, majf=0, minf=4097 00:25:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.388 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.388 issued rwts: total=3601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.388 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.388 job6: (groupid=0, jobs=1): err= 0: pid=205495: Fri Jul 26 06:17:42 2024 00:25:33.388 read: IOPS=445, BW=111MiB/s (117MB/s)(1129MiB/10125msec) 00:25:33.388 slat (usec): min=9, max=127596, avg=1755.30, stdev=7963.99 00:25:33.388 clat (usec): min=1361, max=370710, avg=141667.15, stdev=74720.28 00:25:33.388 lat (usec): min=1388, max=394618, avg=143422.45, stdev=76026.89 00:25:33.388 clat percentiles (msec): 00:25:33.388 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 42], 20.00th=[ 65], 00:25:33.388 | 30.00th=[ 94], 40.00th=[ 134], 50.00th=[ 150], 60.00th=[ 161], 00:25:33.388 | 70.00th=[ 176], 80.00th=[ 199], 90.00th=[ 243], 95.00th=[ 271], 00:25:33.388 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 372], 00:25:33.388 | 99.99th=[ 372] 00:25:33.388 bw ( KiB/s): min=43520, max=220160, per=7.90%, avg=113936.05, stdev=49784.33, samples=20 00:25:33.388 iops : min= 170, max= 860, avg=445.05, stdev=194.48, samples=20 00:25:33.388 lat (msec) : 2=0.27%, 4=0.62%, 10=2.06%, 20=2.52%, 50=9.41% 00:25:33.388 lat (msec) : 100=16.06%, 250=60.11%, 500=8.95% 00:25:33.388 cpu : usr=0.18%, sys=1.17%, ctx=861, majf=0, minf=4097 00:25:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.388 issued rwts: total=4515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.388 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.388 job7: (groupid=0, jobs=1): err= 0: pid=205496: Fri Jul 26 06:17:42 2024 00:25:33.388 read: IOPS=549, BW=137MiB/s (144MB/s)(1392MiB/10126msec) 00:25:33.388 slat (usec): min=9, max=89802, avg=1576.25, stdev=4872.74 00:25:33.388 clat (msec): min=2, 
max=251, avg=114.70, stdev=37.69 00:25:33.388 lat (msec): min=2, max=251, avg=116.28, stdev=38.21 00:25:33.388 clat percentiles (msec): 00:25:33.388 | 1.00th=[ 5], 5.00th=[ 60], 10.00th=[ 69], 20.00th=[ 81], 00:25:33.388 | 30.00th=[ 91], 40.00th=[ 104], 50.00th=[ 117], 60.00th=[ 131], 00:25:33.388 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 161], 95.00th=[ 169], 00:25:33.388 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 243], 99.95th=[ 251], 00:25:33.388 | 99.99th=[ 251] 00:25:33.388 bw ( KiB/s): min=99328, max=207360, per=9.77%, avg=140947.80, stdev=35124.14, samples=20 00:25:33.388 iops : min= 388, max= 810, avg=550.50, stdev=137.21, samples=20 00:25:33.388 lat (msec) : 4=0.68%, 10=0.93%, 20=0.02%, 50=1.85%, 100=33.79% 00:25:33.388 lat (msec) : 250=62.67%, 500=0.05% 00:25:33.388 cpu : usr=0.27%, sys=1.84%, ctx=1097, majf=0, minf=4097 00:25:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.388 issued rwts: total=5569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.388 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.388 job8: (groupid=0, jobs=1): err= 0: pid=205497: Fri Jul 26 06:17:42 2024 00:25:33.388 read: IOPS=490, BW=123MiB/s (128MB/s)(1241MiB/10123msec) 00:25:33.388 slat (usec): min=9, max=236156, avg=1567.72, stdev=7827.84 00:25:33.388 clat (usec): min=1361, max=490605, avg=128865.75, stdev=76945.48 00:25:33.388 lat (usec): min=1380, max=490649, avg=130433.46, stdev=78138.88 00:25:33.388 clat percentiles (msec): 00:25:33.388 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 18], 20.00th=[ 39], 00:25:33.388 | 30.00th=[ 90], 40.00th=[ 122], 50.00th=[ 142], 60.00th=[ 155], 00:25:33.388 | 70.00th=[ 169], 80.00th=[ 192], 90.00th=[ 230], 95.00th=[ 255], 00:25:33.388 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 338], 99.95th=[ 363], 00:25:33.388 | 
99.99th=[ 489] 00:25:33.388 bw ( KiB/s): min=62976, max=406016, per=8.69%, avg=125407.95, stdev=75368.05, samples=20 00:25:33.388 iops : min= 246, max= 1586, avg=489.85, stdev=294.43, samples=20 00:25:33.388 lat (msec) : 2=0.46%, 4=1.41%, 10=5.44%, 20=3.39%, 50=14.45% 00:25:33.388 lat (msec) : 100=7.07%, 250=62.41%, 500=5.36% 00:25:33.388 cpu : usr=0.27%, sys=1.38%, ctx=981, majf=0, minf=4097 00:25:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.388 issued rwts: total=4962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.388 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.388 job9: (groupid=0, jobs=1): err= 0: pid=205503: Fri Jul 26 06:17:42 2024 00:25:33.388 read: IOPS=487, BW=122MiB/s (128MB/s)(1229MiB/10084msec) 00:25:33.388 slat (usec): min=9, max=89144, avg=1691.61, stdev=6746.30 00:25:33.388 clat (msec): min=4, max=335, avg=129.52, stdev=66.23 00:25:33.388 lat (msec): min=4, max=378, avg=131.21, stdev=67.54 00:25:33.389 clat percentiles (msec): 00:25:33.389 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 68], 00:25:33.389 | 30.00th=[ 93], 40.00th=[ 111], 50.00th=[ 125], 60.00th=[ 140], 00:25:33.389 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 226], 95.00th=[ 264], 00:25:33.389 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 334], 00:25:33.389 | 99.99th=[ 334] 00:25:33.389 bw ( KiB/s): min=47104, max=230912, per=8.61%, avg=124191.20, stdev=51494.43, samples=20 00:25:33.389 iops : min= 184, max= 902, avg=485.10, stdev=201.12, samples=20 00:25:33.389 lat (msec) : 10=0.85%, 20=2.03%, 50=7.83%, 100=21.49%, 250=61.75% 00:25:33.389 lat (msec) : 500=6.04% 00:25:33.389 cpu : usr=0.25%, sys=1.45%, ctx=761, majf=0, minf=3721 00:25:33.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:33.389 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.389 issued rwts: total=4915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.389 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.389 job10: (groupid=0, jobs=1): err= 0: pid=205512: Fri Jul 26 06:17:42 2024 00:25:33.389 read: IOPS=521, BW=130MiB/s (137MB/s)(1314MiB/10076msec) 00:25:33.389 slat (usec): min=13, max=83951, avg=1897.97, stdev=5443.49 00:25:33.389 clat (msec): min=65, max=221, avg=120.66, stdev=24.99 00:25:33.389 lat (msec): min=65, max=221, avg=122.56, stdev=25.20 00:25:33.389 clat percentiles (msec): 00:25:33.389 | 1.00th=[ 71], 5.00th=[ 83], 10.00th=[ 90], 20.00th=[ 99], 00:25:33.389 | 30.00th=[ 106], 40.00th=[ 113], 50.00th=[ 120], 60.00th=[ 126], 00:25:33.389 | 70.00th=[ 133], 80.00th=[ 142], 90.00th=[ 155], 95.00th=[ 165], 00:25:33.389 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 209], 99.95th=[ 222], 00:25:33.389 | 99.99th=[ 222] 00:25:33.389 bw ( KiB/s): min=105472, max=176640, per=9.22%, avg=132966.40, stdev=21847.15, samples=20 00:25:33.389 iops : min= 412, max= 690, avg=519.40, stdev=85.34, samples=20 00:25:33.389 lat (msec) : 100=21.69%, 250=78.31% 00:25:33.389 cpu : usr=0.31%, sys=1.75%, ctx=942, majf=0, minf=4097 00:25:33.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.389 issued rwts: total=5257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.389 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.389 00:25:33.389 Run status group 0 (all jobs): 00:25:33.389 READ: bw=1409MiB/s (1477MB/s), 88.9MiB/s-165MiB/s (93.2MB/s-173MB/s), io=13.9GiB (15.0GB), run=10076-10126msec 00:25:33.389 00:25:33.389 Disk stats (read/write): 00:25:33.389 nvme0n1: 
ios=13029/0, merge=0/0, ticks=1236294/0, in_queue=1236294, util=97.07% 00:25:33.389 nvme10n1: ios=10002/0, merge=0/0, ticks=1234798/0, in_queue=1234798, util=97.30% 00:25:33.389 nvme1n1: ios=10372/0, merge=0/0, ticks=1243454/0, in_queue=1243454, util=97.58% 00:25:33.389 nvme2n1: ios=10246/0, merge=0/0, ticks=1241585/0, in_queue=1241585, util=97.72% 00:25:33.389 nvme3n1: ios=11729/0, merge=0/0, ticks=1235812/0, in_queue=1235812, util=97.80% 00:25:33.389 nvme4n1: ios=7017/0, merge=0/0, ticks=1229238/0, in_queue=1229238, util=98.17% 00:25:33.389 nvme5n1: ios=8849/0, merge=0/0, ticks=1232282/0, in_queue=1232282, util=98.35% 00:25:33.389 nvme6n1: ios=10929/0, merge=0/0, ticks=1235517/0, in_queue=1235517, util=98.46% 00:25:33.389 nvme7n1: ios=9748/0, merge=0/0, ticks=1234500/0, in_queue=1234500, util=98.88% 00:25:33.389 nvme8n1: ios=9636/0, merge=0/0, ticks=1238692/0, in_queue=1238692, util=99.08% 00:25:33.389 nvme9n1: ios=10328/0, merge=0/0, ticks=1235099/0, in_queue=1235099, util=99.20% 00:25:33.389 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:33.389 [global] 00:25:33.389 thread=1 00:25:33.389 invalidate=1 00:25:33.389 rw=randwrite 00:25:33.389 time_based=1 00:25:33.389 runtime=10 00:25:33.389 ioengine=libaio 00:25:33.389 direct=1 00:25:33.389 bs=262144 00:25:33.389 iodepth=64 00:25:33.389 norandommap=1 00:25:33.389 numjobs=1 00:25:33.389 00:25:33.389 [job0] 00:25:33.389 filename=/dev/nvme0n1 00:25:33.389 [job1] 00:25:33.389 filename=/dev/nvme10n1 00:25:33.389 [job2] 00:25:33.389 filename=/dev/nvme1n1 00:25:33.389 [job3] 00:25:33.389 filename=/dev/nvme2n1 00:25:33.389 [job4] 00:25:33.389 filename=/dev/nvme3n1 00:25:33.389 [job5] 00:25:33.389 filename=/dev/nvme4n1 00:25:33.389 [job6] 00:25:33.389 filename=/dev/nvme5n1 00:25:33.389 [job7] 00:25:33.389 filename=/dev/nvme6n1 00:25:33.389 [job8] 00:25:33.389 
filename=/dev/nvme7n1 00:25:33.389 [job9] 00:25:33.389 filename=/dev/nvme8n1 00:25:33.389 [job10] 00:25:33.389 filename=/dev/nvme9n1 00:25:33.389 Could not set queue depth (nvme0n1) 00:25:33.389 Could not set queue depth (nvme10n1) 00:25:33.389 Could not set queue depth (nvme1n1) 00:25:33.389 Could not set queue depth (nvme2n1) 00:25:33.389 Could not set queue depth (nvme3n1) 00:25:33.389 Could not set queue depth (nvme4n1) 00:25:33.389 Could not set queue depth (nvme5n1) 00:25:33.389 Could not set queue depth (nvme6n1) 00:25:33.389 Could not set queue depth (nvme7n1) 00:25:33.389 Could not set queue depth (nvme8n1) 00:25:33.389 Could not set queue depth (nvme9n1) 00:25:33.389 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 job10: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.389 fio-3.35 00:25:33.389 Starting 11 threads 00:25:43.365 00:25:43.365 job0: (groupid=0, jobs=1): err= 0: pid=206517: Fri Jul 26 06:17:53 2024 00:25:43.365 write: IOPS=384, BW=96.2MiB/s (101MB/s)(973MiB/10111msec); 0 zone resets 00:25:43.365 slat (usec): min=24, max=98846, avg=2532.21, stdev=5356.80 00:25:43.365 clat (msec): min=30, max=367, avg=163.69, stdev=75.30 00:25:43.365 lat (msec): min=30, max=367, avg=166.22, stdev=76.26 00:25:43.365 clat percentiles (msec): 00:25:43.365 | 1.00th=[ 56], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 84], 00:25:43.365 | 30.00th=[ 108], 40.00th=[ 138], 50.00th=[ 174], 60.00th=[ 194], 00:25:43.365 | 70.00th=[ 209], 80.00th=[ 222], 90.00th=[ 253], 95.00th=[ 300], 00:25:43.365 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 368], 00:25:43.365 | 99.99th=[ 368] 00:25:43.365 bw ( KiB/s): min=49250, max=264192, per=9.29%, avg=98001.70, stdev=52138.64, samples=20 00:25:43.365 iops : min= 192, max= 1032, avg=382.80, stdev=203.69, samples=20 00:25:43.365 lat (msec) : 50=0.21%, 100=25.78%, 250=63.61%, 500=10.41% 00:25:43.365 cpu : usr=1.26%, sys=1.08%, ctx=1056, majf=0, minf=1 00:25:43.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:43.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.365 issued rwts: total=0,3891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.365 job1: (groupid=0, jobs=1): err= 0: pid=206529: Fri Jul 26 06:17:53 2024 00:25:43.365 write: IOPS=493, BW=123MiB/s (129MB/s)(1243MiB/10074msec); 0 zone resets 00:25:43.365 slat (usec): min=12, max=157478, avg=1635.45, stdev=4662.80 00:25:43.365 clat (usec): min=1880, max=452380, avg=127982.81, stdev=81186.74 00:25:43.365 lat (usec): min=1914, max=452430, 
avg=129618.27, stdev=82167.92 00:25:43.365 clat percentiles (msec): 00:25:43.365 | 1.00th=[ 11], 5.00th=[ 42], 10.00th=[ 53], 20.00th=[ 56], 00:25:43.365 | 30.00th=[ 65], 40.00th=[ 87], 50.00th=[ 101], 60.00th=[ 124], 00:25:43.365 | 70.00th=[ 169], 80.00th=[ 205], 90.00th=[ 239], 95.00th=[ 275], 00:25:43.365 | 99.00th=[ 380], 99.50th=[ 409], 99.90th=[ 435], 99.95th=[ 451], 00:25:43.365 | 99.99th=[ 451] 00:25:43.365 bw ( KiB/s): min=55296, max=266752, per=11.92%, avg=125696.00, stdev=61252.54, samples=20 00:25:43.365 iops : min= 216, max= 1042, avg=491.00, stdev=239.27, samples=20 00:25:43.365 lat (msec) : 2=0.02%, 4=0.36%, 10=0.60%, 20=0.82%, 50=5.13% 00:25:43.365 lat (msec) : 100=42.65%, 250=42.15%, 500=8.26% 00:25:43.365 cpu : usr=1.41%, sys=1.33%, ctx=2103, majf=0, minf=1 00:25:43.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:43.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.365 issued rwts: total=0,4973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.365 job2: (groupid=0, jobs=1): err= 0: pid=206531: Fri Jul 26 06:17:53 2024 00:25:43.365 write: IOPS=367, BW=91.8MiB/s (96.3MB/s)(927MiB/10097msec); 0 zone resets 00:25:43.365 slat (usec): min=17, max=76916, avg=2019.66, stdev=5451.54 00:25:43.365 clat (usec): min=1869, max=367827, avg=172173.73, stdev=91652.54 00:25:43.365 lat (usec): min=1917, max=370341, avg=174193.39, stdev=92892.89 00:25:43.365 clat percentiles (msec): 00:25:43.365 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 41], 20.00th=[ 83], 00:25:43.365 | 30.00th=[ 115], 40.00th=[ 144], 50.00th=[ 192], 60.00th=[ 211], 00:25:43.365 | 70.00th=[ 226], 80.00th=[ 247], 90.00th=[ 296], 95.00th=[ 326], 00:25:43.365 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 363], 99.95th=[ 363], 00:25:43.365 | 99.99th=[ 368] 00:25:43.365 bw ( KiB/s): 
min=49152, max=177664, per=8.85%, avg=93312.00, stdev=34865.06, samples=20 00:25:43.365 iops : min= 192, max= 694, avg=364.50, stdev=136.19, samples=20 00:25:43.365 lat (msec) : 2=0.03%, 4=0.49%, 10=3.59%, 20=3.72%, 50=4.61% 00:25:43.365 lat (msec) : 100=9.74%, 250=59.12%, 500=18.72% 00:25:43.365 cpu : usr=0.97%, sys=1.20%, ctx=2067, majf=0, minf=1 00:25:43.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:43.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.365 issued rwts: total=0,3708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.365 job3: (groupid=0, jobs=1): err= 0: pid=206532: Fri Jul 26 06:17:53 2024 00:25:43.365 write: IOPS=404, BW=101MiB/s (106MB/s)(1034MiB/10224msec); 0 zone resets 00:25:43.365 slat (usec): min=16, max=137982, avg=1426.80, stdev=4958.13 00:25:43.365 clat (usec): min=1132, max=405002, avg=156649.50, stdev=98130.14 00:25:43.365 lat (usec): min=1169, max=411671, avg=158076.30, stdev=99349.45 00:25:43.365 clat percentiles (msec): 00:25:43.365 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 56], 00:25:43.365 | 30.00th=[ 85], 40.00th=[ 115], 50.00th=[ 153], 60.00th=[ 192], 00:25:43.365 | 70.00th=[ 211], 80.00th=[ 255], 90.00th=[ 292], 95.00th=[ 321], 00:25:43.365 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 401], 99.95th=[ 405], 00:25:43.365 | 99.99th=[ 405] 00:25:43.365 bw ( KiB/s): min=49152, max=218112, per=9.89%, avg=104275.85, stdev=43752.63, samples=20 00:25:43.365 iops : min= 192, max= 852, avg=407.30, stdev=170.93, samples=20 00:25:43.365 lat (msec) : 2=0.36%, 4=0.48%, 10=2.51%, 20=3.80%, 50=10.30% 00:25:43.365 lat (msec) : 100=17.79%, 250=43.68%, 500=21.08% 00:25:43.365 cpu : usr=1.13%, sys=1.48%, ctx=2827, majf=0, minf=1 00:25:43.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 
00:25:43.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.365 issued rwts: total=0,4137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.365 job4: (groupid=0, jobs=1): err= 0: pid=206533: Fri Jul 26 06:17:53 2024 00:25:43.365 write: IOPS=371, BW=92.8MiB/s (97.3MB/s)(943MiB/10160msec); 0 zone resets 00:25:43.365 slat (usec): min=18, max=195975, avg=2078.64, stdev=7408.17 00:25:43.365 clat (usec): min=1215, max=361865, avg=170231.58, stdev=75522.46 00:25:43.365 lat (usec): min=1249, max=361908, avg=172310.22, stdev=76392.37 00:25:43.365 clat percentiles (msec): 00:25:43.365 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 69], 20.00th=[ 102], 00:25:43.365 | 30.00th=[ 123], 40.00th=[ 161], 50.00th=[ 178], 60.00th=[ 192], 00:25:43.365 | 70.00th=[ 209], 80.00th=[ 234], 90.00th=[ 262], 95.00th=[ 296], 00:25:43.365 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 363], 00:25:43.365 | 99.99th=[ 363] 00:25:43.365 bw ( KiB/s): min=68608, max=167424, per=9.00%, avg=94908.20, stdev=24648.90, samples=20 00:25:43.365 iops : min= 268, max= 654, avg=370.70, stdev=96.29, samples=20 00:25:43.365 lat (msec) : 2=0.19%, 4=0.34%, 10=0.45%, 20=1.75%, 50=4.24% 00:25:43.365 lat (msec) : 100=11.14%, 250=68.44%, 500=13.45% 00:25:43.365 cpu : usr=1.10%, sys=1.10%, ctx=1721, majf=0, minf=1 00:25:43.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:43.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.365 issued rwts: total=0,3770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.365 job5: (groupid=0, jobs=1): err= 0: pid=206534: Fri Jul 26 06:17:53 2024 00:25:43.365 write: IOPS=313, 
BW=78.5MiB/s (82.3MB/s)(802MiB/10225msec); 0 zone resets 00:25:43.365 slat (usec): min=22, max=75237, avg=2221.78, stdev=6056.44 00:25:43.365 clat (msec): min=2, max=447, avg=201.57, stdev=91.13 00:25:43.365 lat (msec): min=2, max=447, avg=203.80, stdev=92.29 00:25:43.365 clat percentiles (msec): 00:25:43.365 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 67], 20.00th=[ 114], 00:25:43.365 | 30.00th=[ 157], 40.00th=[ 203], 50.00th=[ 215], 60.00th=[ 230], 00:25:43.365 | 70.00th=[ 247], 80.00th=[ 271], 90.00th=[ 317], 95.00th=[ 347], 00:25:43.365 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 435], 99.95th=[ 447], 00:25:43.365 | 99.99th=[ 447] 00:25:43.365 bw ( KiB/s): min=40960, max=108032, per=7.63%, avg=80518.30, stdev=19905.01, samples=20 00:25:43.365 iops : min= 160, max= 422, avg=314.50, stdev=77.78, samples=20 00:25:43.365 lat (msec) : 4=0.09%, 10=0.53%, 20=1.90%, 50=4.92%, 100=8.76% 00:25:43.365 lat (msec) : 250=55.87%, 500=27.92% 00:25:43.365 cpu : usr=0.78%, sys=1.13%, ctx=1811, majf=0, minf=1 00:25:43.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:43.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.365 issued rwts: total=0,3209,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.365 job6: (groupid=0, jobs=1): err= 0: pid=206535: Fri Jul 26 06:17:53 2024 00:25:43.365 write: IOPS=356, BW=89.1MiB/s (93.5MB/s)(906MiB/10161msec); 0 zone resets 00:25:43.365 slat (usec): min=20, max=71405, avg=2094.43, stdev=5380.94 00:25:43.365 clat (usec): min=1926, max=374797, avg=177308.43, stdev=85601.94 00:25:43.365 lat (usec): min=1972, max=374836, avg=179402.86, stdev=86953.33 00:25:43.365 clat percentiles (msec): 00:25:43.366 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 37], 20.00th=[ 88], 00:25:43.366 | 30.00th=[ 136], 40.00th=[ 178], 50.00th=[ 201], 60.00th=[ 218], 
00:25:43.366 | 70.00th=[ 228], 80.00th=[ 245], 90.00th=[ 275], 95.00th=[ 296], 00:25:43.366 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 376], 00:25:43.366 | 99.99th=[ 376] 00:25:43.366 bw ( KiB/s): min=43008, max=172544, per=8.64%, avg=91136.00, stdev=33491.38, samples=20 00:25:43.366 iops : min= 168, max= 674, avg=356.00, stdev=130.83, samples=20 00:25:43.366 lat (msec) : 2=0.03%, 4=0.39%, 10=2.40%, 20=3.15%, 50=6.10% 00:25:43.366 lat (msec) : 100=10.90%, 250=60.03%, 500=17.00% 00:25:43.366 cpu : usr=1.02%, sys=1.10%, ctx=1977, majf=0, minf=1 00:25:43.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.366 issued rwts: total=0,3623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.366 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.366 job7: (groupid=0, jobs=1): err= 0: pid=206536: Fri Jul 26 06:17:53 2024 00:25:43.366 write: IOPS=334, BW=83.7MiB/s (87.8MB/s)(851MiB/10159msec); 0 zone resets 00:25:43.366 slat (usec): min=24, max=116993, avg=2409.97, stdev=6040.48 00:25:43.366 clat (msec): min=2, max=369, avg=188.47, stdev=74.73 00:25:43.366 lat (msec): min=2, max=369, avg=190.88, stdev=75.71 00:25:43.366 clat percentiles (msec): 00:25:43.366 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 85], 20.00th=[ 112], 00:25:43.366 | 30.00th=[ 159], 40.00th=[ 186], 50.00th=[ 201], 60.00th=[ 213], 00:25:43.366 | 70.00th=[ 230], 80.00th=[ 251], 90.00th=[ 275], 95.00th=[ 300], 00:25:43.366 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 368], 00:25:43.366 | 99.99th=[ 372] 00:25:43.366 bw ( KiB/s): min=53248, max=151552, per=8.11%, avg=85510.75, stdev=25690.28, samples=20 00:25:43.366 iops : min= 208, max= 592, avg=334.00, stdev=100.37, samples=20 00:25:43.366 lat (msec) : 4=0.15%, 10=0.65%, 20=1.67%, 50=3.20%, 100=7.20% 00:25:43.366 lat 
(msec) : 250=66.97%, 500=20.16% 00:25:43.366 cpu : usr=1.06%, sys=0.97%, ctx=1560, majf=0, minf=1 00:25:43.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:25:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.366 issued rwts: total=0,3403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.366 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.366 job8: (groupid=0, jobs=1): err= 0: pid=206541: Fri Jul 26 06:17:53 2024 00:25:43.366 write: IOPS=396, BW=99.0MiB/s (104MB/s)(997MiB/10066msec); 0 zone resets 00:25:43.366 slat (usec): min=18, max=153481, avg=1264.66, stdev=5670.92 00:25:43.366 clat (usec): min=1854, max=388872, avg=160160.29, stdev=83556.62 00:25:43.366 lat (usec): min=1909, max=388914, avg=161424.95, stdev=84236.12 00:25:43.366 clat percentiles (msec): 00:25:43.366 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 41], 20.00th=[ 67], 00:25:43.366 | 30.00th=[ 108], 40.00th=[ 150], 50.00th=[ 171], 60.00th=[ 192], 00:25:43.366 | 70.00th=[ 209], 80.00th=[ 228], 90.00th=[ 268], 95.00th=[ 296], 00:25:43.366 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 363], 99.95th=[ 388], 00:25:43.366 | 99.99th=[ 388] 00:25:43.366 bw ( KiB/s): min=60416, max=171350, per=9.53%, avg=100497.10, stdev=28700.91, samples=20 00:25:43.366 iops : min= 236, max= 669, avg=392.55, stdev=112.07, samples=20 00:25:43.366 lat (msec) : 2=0.05%, 4=0.35%, 10=1.38%, 20=2.58%, 50=8.45% 00:25:43.366 lat (msec) : 100=14.24%, 250=60.01%, 500=12.94% 00:25:43.366 cpu : usr=1.09%, sys=1.27%, ctx=2635, majf=0, minf=1 00:25:43.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.366 issued rwts: total=0,3988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.366 
latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.366 job9: (groupid=0, jobs=1): err= 0: pid=206555: Fri Jul 26 06:17:53 2024 00:25:43.366 write: IOPS=362, BW=90.6MiB/s (95.0MB/s)(927MiB/10224msec); 0 zone resets 00:25:43.366 slat (usec): min=24, max=50054, avg=2339.36, stdev=5228.43 00:25:43.366 clat (msec): min=3, max=423, avg=174.11, stdev=83.77 00:25:43.366 lat (msec): min=4, max=424, avg=176.45, stdev=84.85 00:25:43.366 clat percentiles (msec): 00:25:43.366 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 53], 20.00th=[ 104], 00:25:43.366 | 30.00th=[ 132], 40.00th=[ 165], 50.00th=[ 178], 60.00th=[ 190], 00:25:43.366 | 70.00th=[ 220], 80.00th=[ 241], 90.00th=[ 288], 95.00th=[ 313], 00:25:43.366 | 99.00th=[ 359], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 422], 00:25:43.366 | 99.99th=[ 426] 00:25:43.366 bw ( KiB/s): min=50176, max=180736, per=8.84%, avg=93260.80, stdev=35472.12, samples=20 00:25:43.366 iops : min= 196, max= 706, avg=364.30, stdev=138.56, samples=20 00:25:43.366 lat (msec) : 4=0.05%, 10=0.89%, 20=2.35%, 50=6.13%, 100=9.82% 00:25:43.366 lat (msec) : 250=63.44%, 500=17.32% 00:25:43.366 cpu : usr=1.06%, sys=1.24%, ctx=1635, majf=0, minf=1 00:25:43.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.366 issued rwts: total=0,3706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.366 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.366 job10: (groupid=0, jobs=1): err= 0: pid=206564: Fri Jul 26 06:17:53 2024 00:25:43.366 write: IOPS=363, BW=90.9MiB/s (95.3MB/s)(930MiB/10226msec); 0 zone resets 00:25:43.366 slat (usec): min=20, max=102374, avg=1895.96, stdev=4992.57 00:25:43.366 clat (msec): min=3, max=450, avg=173.97, stdev=85.97 00:25:43.366 lat (msec): min=3, max=451, avg=175.87, stdev=86.91 00:25:43.366 clat percentiles (msec): 
00:25:43.366 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 58], 20.00th=[ 96], 00:25:43.366 | 30.00th=[ 129], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 192], 00:25:43.366 | 70.00th=[ 211], 80.00th=[ 234], 90.00th=[ 279], 95.00th=[ 330], 00:25:43.366 | 99.00th=[ 405], 99.50th=[ 422], 99.90th=[ 435], 99.95th=[ 451], 00:25:43.366 | 99.99th=[ 451] 00:25:43.366 bw ( KiB/s): min=53248, max=143872, per=8.87%, avg=93593.60, stdev=26329.37, samples=20 00:25:43.366 iops : min= 208, max= 562, avg=365.60, stdev=102.85, samples=20 00:25:43.366 lat (msec) : 4=0.08%, 10=0.97%, 20=1.51%, 50=5.59%, 100=13.18% 00:25:43.366 lat (msec) : 250=63.89%, 500=14.79% 00:25:43.366 cpu : usr=1.08%, sys=1.35%, ctx=2024, majf=0, minf=1 00:25:43.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:43.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.366 issued rwts: total=0,3719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.366 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.366 00:25:43.366 Run status group 0 (all jobs): 00:25:43.366 WRITE: bw=1030MiB/s (1080MB/s), 78.5MiB/s-123MiB/s (82.3MB/s-129MB/s), io=10.3GiB (11.0GB), run=10066-10226msec 00:25:43.366 00:25:43.366 Disk stats (read/write): 00:25:43.366 nvme0n1: ios=49/7606, merge=0/0, ticks=67/1200328, in_queue=1200395, util=97.31% 00:25:43.366 nvme10n1: ios=46/9719, merge=0/0, ticks=223/1216971, in_queue=1217194, util=99.15% 00:25:43.366 nvme1n1: ios=13/7230, merge=0/0, ticks=21/1210739, in_queue=1210760, util=97.54% 00:25:43.366 nvme2n1: ios=0/8234, merge=0/0, ticks=0/1249046, in_queue=1249046, util=97.75% 00:25:43.366 nvme3n1: ios=50/7366, merge=0/0, ticks=4908/1147969, in_queue=1152877, util=99.97% 00:25:43.366 nvme4n1: ios=44/6377, merge=0/0, ticks=368/1242174, in_queue=1242542, util=99.86% 00:25:43.366 nvme5n1: ios=0/7069, merge=0/0, ticks=0/1212541, in_queue=1212541, 
util=98.23% 00:25:43.366 nvme6n1: ios=46/6633, merge=0/0, ticks=3812/1205578, in_queue=1209390, util=100.00% 00:25:43.366 nvme7n1: ios=48/7689, merge=0/0, ticks=3812/1184928, in_queue=1188740, util=100.00% 00:25:43.366 nvme8n1: ios=0/7372, merge=0/0, ticks=0/1235606, in_queue=1235606, util=98.97% 00:25:43.366 nvme9n1: ios=0/7398, merge=0/0, ticks=0/1244039, in_queue=1244039, util=99.12% 00:25:43.366 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:43.366 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:43.366 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.366 06:17:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:43.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:43.366 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:43.366 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:43.367 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:43.367 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:43.367 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:43.367 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.367 06:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.367 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.367 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.367 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:43.625 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.625 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:43.884 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:43.884 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:43.884 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:43.884 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:43.884 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:44.142 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:44.142 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.400 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:44.659 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.659 06:17:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:44.917 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:44.917 06:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.917 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:45.175 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep 
-q -w SPDK8 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.175 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:45.741 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:45.741 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:45.741 06:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.741 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:46.000 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:46.000 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:46.000 rmmod nvme_tcp 00:25:46.000 rmmod nvme_fabrics 00:25:46.000 rmmod nvme_keyring 00:25:46.259 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:46.259 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:46.259 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:46.259 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 201089 ']' 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 
201089 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 201089 ']' 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 201089 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 201089 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 201089' 00:25:46.260 killing process with pid 201089 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 201089 00:25:46.260 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 201089 00:25:49.545 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:25:49.545 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:49.545 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:49.545 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:49.545 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.545 06:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:49.545 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.545 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.545 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.448 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:51.448 00:25:51.448 real 1m5.426s 00:25:51.448 user 3m38.411s 00:25:51.448 sys 0m22.576s 00:25:51.448 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:51.448 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.448 ************************************ 00:25:51.448 END TEST nvmf_multiconnection 00:25:51.448 ************************************ 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:51.449 ************************************ 00:25:51.449 START TEST nvmf_initiator_timeout 00:25:51.449 ************************************ 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:51.449 * Looking 
for test storage... 00:25:51.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.449 06:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:51.449 06:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.449 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.353 06:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.353 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:53.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:53.354 06:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:53.354 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:53.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:53.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.354 06:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:53.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:25:53.354 00:25:53.354 --- 10.0.0.2 ping statistics --- 00:25:53.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.354 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:25:53.354 00:25:53.354 --- 10.0.0.1 ping statistics --- 00:25:53.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.354 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.354 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.355 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:53.614 
06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=210371 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 210371 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 210371 ']' 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:53.614 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.614 [2024-07-26 06:18:04.781432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:53.614 [2024-07-26 06:18:04.781579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.614 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.614 [2024-07-26 06:18:04.921292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.874 [2024-07-26 06:18:05.184893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.874 [2024-07-26 06:18:05.184970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.874 [2024-07-26 06:18:05.185000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.874 [2024-07-26 06:18:05.185021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.874 [2024-07-26 06:18:05.185043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:53.874 [2024-07-26 06:18:05.185174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.874 [2024-07-26 06:18:05.185247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.874 [2024-07-26 06:18:05.185348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.874 [2024-07-26 06:18:05.185359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.442 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.730 Malloc0 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.731 06:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.731 Delay0 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.731 [2024-07-26 06:18:05.830208] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.731 [2024-07-26 06:18:05.859426] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.731 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:55.298 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:55.298 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:55.298 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.298 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:55.298 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:57.198 06:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=211209 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:57.198 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:57.198 [global] 00:25:57.198 thread=1 00:25:57.198 invalidate=1 00:25:57.198 rw=write 00:25:57.198 time_based=1 00:25:57.198 runtime=60 00:25:57.198 ioengine=libaio 00:25:57.198 direct=1 00:25:57.198 bs=4096 00:25:57.198 iodepth=1 00:25:57.198 norandommap=0 00:25:57.198 numjobs=1 00:25:57.198 00:25:57.198 verify_dump=1 00:25:57.198 verify_backlog=512 00:25:57.198 verify_state_save=0 00:25:57.198 do_verify=1 00:25:57.198 verify=crc32c-intel 00:25:57.198 [job0] 00:25:57.198 filename=/dev/nvme0n1 00:25:57.198 Could not set queue depth (nvme0n1) 00:25:57.456 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:57.456 fio-3.35 00:25:57.456 Starting 1 thread 00:26:00.751 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.751 true 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.751 true 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.751 true 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.751 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.751 true 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.751 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.288 true 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.288 true 00:26:03.288 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.289 true 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.289 true 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:03.289 06:18:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 211209 00:26:59.526 00:26:59.526 job0: (groupid=0, jobs=1): err= 0: pid=211301: Fri Jul 26 06:19:08 2024 00:26:59.526 read: IOPS=84, BW=337KiB/s (346kB/s)(19.8MiB/60035msec) 00:26:59.526 slat (nsec): min=5524, max=75655, avg=12449.31, stdev=8592.19 00:26:59.526 clat (usec): min=307, max=41044k, avg=11502.93, stdev=576766.56 00:26:59.526 lat (usec): min=313, max=41044k, avg=11515.38, stdev=576766.65 00:26:59.526 clat percentiles (usec): 00:26:59.526 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 330], 00:26:59.526 | 20.00th=[ 343], 30.00th=[ 359], 40.00th=[ 371], 00:26:59.526 | 50.00th=[ 383], 60.00th=[ 420], 70.00th=[ 437], 00:26:59.526 | 80.00th=[ 498], 90.00th=[ 562], 95.00th=[ 41157], 00:26:59.526 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41681], 00:26:59.526 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:59.526 write: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60035msec); 0 zone resets 00:26:59.526 slat (nsec): min=7210, max=82780, avg=16048.51, stdev=9747.51 00:26:59.526 clat (usec): min=229, max=490, avg=309.77, stdev=48.73 00:26:59.526 lat (usec): min=237, max=553, avg=325.82, stdev=55.02 00:26:59.526 clat percentiles (usec): 00:26:59.526 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 
20.00th=[ 260], 00:26:59.526 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:26:59.526 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 400], 00:26:59.526 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 474], 99.95th=[ 486], 00:26:59.526 | 99.99th=[ 490] 00:26:59.526 bw ( KiB/s): min= 720, max= 7472, per=100.00%, avg=3723.64, stdev=1925.36, samples=11 00:26:59.526 iops : min= 180, max= 1868, avg=930.91, stdev=481.34, samples=11 00:26:59.526 lat (usec) : 250=4.36%, 500=85.92%, 750=6.04% 00:26:59.526 lat (msec) : 50=3.67%, >=2000=0.01% 00:26:59.526 cpu : usr=0.18%, sys=0.31%, ctx=10185, majf=0, minf=2 00:26:59.526 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.526 issued rwts: total=5065,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.526 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:59.526 00:26:59.526 Run status group 0 (all jobs): 00:26:59.526 READ: bw=337KiB/s (346kB/s), 337KiB/s-337KiB/s (346kB/s-346kB/s), io=19.8MiB (20.7MB), run=60035-60035msec 00:26:59.526 WRITE: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (21.0MB), run=60035-60035msec 00:26:59.526 00:26:59.526 Disk stats (read/write): 00:26:59.526 nvme0n1: ios=5160/5120, merge=0/0, ticks=17069/1526, in_queue=18595, util=99.78% 00:26:59.526 06:19:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:59.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 
00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:59.526 nvmf hotplug test: fio successful as expected 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:59.526 06:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:59.526 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:59.527 rmmod nvme_tcp 00:26:59.527 rmmod nvme_fabrics 00:26:59.527 rmmod nvme_keyring 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 210371 ']' 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 210371 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 210371 ']' 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 210371 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 210371 00:26:59.527 06:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 210371' 00:26:59.527 killing process with pid 210371 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 210371 00:26:59.527 06:19:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 210371 00:26:59.527 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.527 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.428 
00:27:01.428 real 1m10.110s 00:27:01.428 user 4m15.968s 00:27:01.428 sys 0m6.933s 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.428 ************************************ 00:27:01.428 END TEST nvmf_initiator_timeout 00:27:01.428 ************************************ 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.428 06:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 
-- # local -ga e810 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.998 06:19:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:03.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.998 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:03.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:03.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:03.999 
Found net devices under 0000:0a:00.1: cvl_0_1 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:03.999 ************************************ 00:27:03.999 START TEST nvmf_perf_adq 00:27:03.999 ************************************ 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:03.999 * Looking for test storage... 
00:27:03.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.999 06:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.999 06:19:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:05.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:05.375 06:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.375 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:05.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
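The glob in the entry above is how SPDK's `gather_supported_nvmf_pci_devs` maps a PCI address to its kernel net device name before the `##*/` expansion strips the sysfs path down to a bare interface name. A standalone sketch of that lookup (a mock sysfs tree is used instead of the real `/sys` so it runs anywhere; the PCI address and the `cvl_0_0` name are taken from the log output above, not hard requirements):

```shell
# Sketch of the pci_net_devs lookup seen at nvmf/common.sh@383 and @399.
# A temporary mock of the sysfs layout stands in for /sys so the sketch is
# self-contained; "cvl_0_0" mirrors the device name the log reports.
sysfs=$(mktemp -d)
pci="0000:0a:00.0"
mkdir -p "$sysfs/bus/pci/devices/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)  # one path per net device
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip dirs, keep names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# prints: Found net devices under 0000:0a:00.0: cvl_0_0
rm -rf "$sysfs"
```

The two-step array assignment is the same idiom the script uses: the glob yields full sysfs paths, and the `##*/` parameter expansion applied across the array keeps only the trailing interface names that later become `TCP_INTERFACE_LIST`.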
00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:05.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:05.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:05.376 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:05.944 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:08.482 06:19:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:13.761 
06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.761 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.761 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.761 06:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.761 06:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.761 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.762 
06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:13.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:27:13.762 00:27:13.762 --- 10.0.0.2 ping statistics --- 00:27:13.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.762 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:27:13.762 00:27:13.762 --- 10.0.0.1 ping statistics --- 00:27:13.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.762 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=223141 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 223141 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 223141 ']' 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.762 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.762 [2024-07-26 06:19:24.473288] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:13.762 [2024-07-26 06:19:24.473437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.762 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.762 [2024-07-26 06:19:24.602173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.762 [2024-07-26 06:19:24.859910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.762 [2024-07-26 06:19:24.859990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.762 [2024-07-26 06:19:24.860032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.762 [2024-07-26 06:19:24.860086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.762 [2024-07-26 06:19:24.860123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:13.762 [2024-07-26 06:19:24.860255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.762 [2024-07-26 06:19:24.860330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.762 [2024-07-26 06:19:24.860389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.762 [2024-07-26 06:19:24.860393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:14.331 06:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.331 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.589 [2024-07-26 06:19:25.855478] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.589 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.847 Malloc1 00:27:14.847 06:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.847 [2024-07-26 06:19:25.960032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=223309 00:27:14.847 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:14.847 06:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:14.847 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.749 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:16.749 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.749 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.749 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.749 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:16.749 "tick_rate": 2700000000, 00:27:16.749 "poll_groups": [ 00:27:16.749 { 00:27:16.749 "name": "nvmf_tgt_poll_group_000", 00:27:16.749 "admin_qpairs": 1, 00:27:16.749 "io_qpairs": 1, 00:27:16.749 "current_admin_qpairs": 1, 00:27:16.749 "current_io_qpairs": 1, 00:27:16.749 "pending_bdev_io": 0, 00:27:16.749 "completed_nvme_io": 17521, 00:27:16.749 "transports": [ 00:27:16.749 { 00:27:16.749 "trtype": "TCP" 00:27:16.749 } 00:27:16.749 ] 00:27:16.749 }, 00:27:16.749 { 00:27:16.749 "name": "nvmf_tgt_poll_group_001", 00:27:16.749 "admin_qpairs": 0, 00:27:16.749 "io_qpairs": 1, 00:27:16.749 "current_admin_qpairs": 0, 00:27:16.750 "current_io_qpairs": 1, 00:27:16.750 "pending_bdev_io": 0, 00:27:16.750 "completed_nvme_io": 15465, 00:27:16.750 "transports": [ 00:27:16.750 { 00:27:16.750 "trtype": "TCP" 00:27:16.750 } 00:27:16.750 ] 00:27:16.750 }, 00:27:16.750 { 00:27:16.750 "name": "nvmf_tgt_poll_group_002", 00:27:16.750 "admin_qpairs": 0, 00:27:16.750 "io_qpairs": 1, 00:27:16.750 "current_admin_qpairs": 0, 00:27:16.750 "current_io_qpairs": 1, 00:27:16.750 "pending_bdev_io": 0, 
00:27:16.750 "completed_nvme_io": 17507, 00:27:16.750 "transports": [ 00:27:16.750 { 00:27:16.750 "trtype": "TCP" 00:27:16.750 } 00:27:16.750 ] 00:27:16.750 }, 00:27:16.750 { 00:27:16.750 "name": "nvmf_tgt_poll_group_003", 00:27:16.750 "admin_qpairs": 0, 00:27:16.750 "io_qpairs": 1, 00:27:16.750 "current_admin_qpairs": 0, 00:27:16.750 "current_io_qpairs": 1, 00:27:16.750 "pending_bdev_io": 0, 00:27:16.750 "completed_nvme_io": 15698, 00:27:16.750 "transports": [ 00:27:16.750 { 00:27:16.750 "trtype": "TCP" 00:27:16.750 } 00:27:16.750 ] 00:27:16.750 } 00:27:16.750 ] 00:27:16.750 }' 00:27:16.750 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:16.750 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:16.750 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:16.750 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:16.750 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 223309 00:27:24.866 Initializing NVMe Controllers 00:27:24.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:24.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:24.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:24.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:24.866 Initialization complete. Launching workers. 
00:27:24.866 ======================================================== 00:27:24.866 Latency(us) 00:27:24.866 Device Information : IOPS MiB/s Average min max 00:27:24.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9598.30 37.49 6669.45 3231.72 10973.48 00:27:24.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8346.80 32.60 7668.03 2825.46 11280.15 00:27:24.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9562.40 37.35 6692.92 3778.75 9633.19 00:27:24.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8472.70 33.10 7554.46 2879.80 12076.84 00:27:24.866 ======================================================== 00:27:24.866 Total : 35980.20 140.55 7115.74 2825.46 12076.84 00:27:24.866 00:27:24.866 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:24.866 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:24.866 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:24.866 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:24.866 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:24.866 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:24.866 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:24.866 rmmod nvme_tcp 00:27:25.126 rmmod nvme_fabrics 00:27:25.126 rmmod nvme_keyring 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:25.126 06:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 223141 ']' 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 223141 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 223141 ']' 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 223141 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 223141 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 223141' 00:27:25.126 killing process with pid 223141 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 223141 00:27:25.126 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 223141 00:27:26.536 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.536 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.444 06:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.444 06:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:28.444 06:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:29.378 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:31.290 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.574 
06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
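The device discovery running here matches PCI vendor:device IDs (0x8086:0x159b is the E810 "ice" NIC) and then resolves each matched PCI function to its kernel net device through sysfs. A minimal standalone sketch of that sysfs lookup follows; the second argument (an alternate sysfs root) is an addition of this sketch, there only to make the function easy to exercise off-target, and `0000:0a:00.0` is the address from the log:

```shell
#!/usr/bin/env bash
# Resolve a PCI network function to its kernel interface name(s) via sysfs,
# mirroring the pci_net_devs lookup performed by nvmf/common.sh above.
# The optional second argument overrides the sysfs base for testing.
pci_to_netdev() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local d
    for d in "$base/$pci/net/"*; do
        # The glob stays literal when the directory is absent; skip that case.
        [[ -e $d ]] && printf '%s\n' "${d##*/}"
    done
    return 0
}

pci_to_netdev 0000:0a:00.0
```

On a host without that PCI address this prints nothing; on the machine in the log it would report `cvl_0_0`, matching the "Found net devices under 0000:0a:00.0" line above.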
00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:36.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.574 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:36.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.575 06:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:36.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:36.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.575 06:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:36.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:27:36.575 00:27:36.575 --- 10.0.0.2 ping statistics --- 00:27:36.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.575 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:27:36.575 00:27:36.575 --- 10.0.0.1 ping statistics --- 00:27:36.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.575 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:36.575 net.core.busy_poll = 1 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:36.575 net.core.busy_read = 1 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
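The ADQ plumbing just performed (hardware TC offload, busy-poll sysctls, an mqprio root qdisc splitting the queues into two traffic classes, and a flower filter steering the NVMe/TCP listener's flows into the dedicated class) condenses to a handful of commands. A hedged sketch, parameterized on the values from the log; the `DRYRUN` wrapper is an addition of this sketch, and it defaults to printing the commands since the real ones need root and an ice (E810) interface:

```shell
#!/usr/bin/env bash
# Condensed ADQ setup, mirroring the perf_adq.sh@22-@38 steps above.
# DRYRUN defaults to on so commands are printed, not executed; clear it
# (DRYRUN=) for a real run as root on an ice (E810) interface.
: "${DRYRUN:=1}"
run() { ${DRYRUN:+echo} "$@"; }

adq_configure() {
    local iface=$1 traddr=$2 trsvcid=$3
    run ethtool --offload "$iface" hw-tc-offload on
    run ethtool --set-priv-flags "$iface" channel-pkt-inspect-optimize off
    run sysctl -w net.core.busy_poll=1
    run sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 on queues 0-1 (default), TC1 on queues 2-3 (NVMe/TCP).
    run tc qdisc add dev "$iface" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    run tc qdisc add dev "$iface" ingress
    # Steer flows addressed to the listener into TC1, offloaded to hardware (skip_sw).
    run tc filter add dev "$iface" protocol ip parent ffff: prio 1 flower \
        dst_ip "$traddr"/32 ip_proto tcp dst_port "$trsvcid" skip_sw hw_tc 1
}

adq_configure cvl_0_0 10.0.0.2 4420
```

With dry-run on, the script simply echoes each command in order, which is a convenient way to review the generated tc invocations before committing them to hardware.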
00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=226047 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 226047 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 226047 ']' 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.575 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.575 [2024-07-26 06:19:47.657947] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
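`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 226047 here) has its JSON-RPC UNIX socket available, giving up after `max_retries` attempts. A simplified, hypothetical re-implementation of that polling loop — the socket path, retry count, and sleep interval are illustrative defaults, not SPDK's exact logic:

```shell
#!/usr/bin/env bash
# Poll until $pid exposes its RPC UNIX socket, failing early if the
# process dies and failing after $retries attempts otherwise. A sketch
# of autotest_common.sh's waitforlisten behavior, not a copy of it.
waitforlisten() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_sock ]] && return 0           # RPC socket exists
        sleep 0.1
    done
    return 1                                      # timed out
}
```

A caller would typically launch the target in the background, then gate all subsequent `rpc_cmd` calls on `waitforlisten "$nvmfpid"` succeeding, exactly as the log's "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line reflects.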
00:27:36.575 [2024-07-26 06:19:47.658098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.575 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.575 [2024-07-26 06:19:47.800509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.833 [2024-07-26 06:19:48.061644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.833 [2024-07-26 06:19:48.061716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.833 [2024-07-26 06:19:48.061759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.833 [2024-07-26 06:19:48.061793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.833 [2024-07-26 06:19:48.061828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:36.833 [2024-07-26 06:19:48.061983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.833 [2024-07-26 06:19:48.062075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.833 [2024-07-26 06:19:48.062153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.833 [2024-07-26 06:19:48.062155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:37.401 06:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.401 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.661 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.661 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:37.661 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.661 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.661 [2024-07-26 06:19:48.987707] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.921 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.921 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:37.921 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.921 06:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.921 Malloc1 00:27:37.921 06:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.921 [2024-07-26 06:19:49.094132] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=226215 00:27:37.921 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:37.921 06:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:37.921 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.825 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:39.825 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.825 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.825 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.825 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:39.825 "tick_rate": 2700000000, 00:27:39.825 "poll_groups": [ 00:27:39.825 { 00:27:39.825 "name": "nvmf_tgt_poll_group_000", 00:27:39.825 "admin_qpairs": 1, 00:27:39.825 "io_qpairs": 3, 00:27:39.825 "current_admin_qpairs": 1, 00:27:39.825 "current_io_qpairs": 3, 00:27:39.825 "pending_bdev_io": 0, 00:27:39.825 "completed_nvme_io": 20177, 00:27:39.825 "transports": [ 00:27:39.825 { 00:27:39.825 "trtype": "TCP" 00:27:39.825 } 00:27:39.825 ] 00:27:39.825 }, 00:27:39.825 { 00:27:39.825 "name": "nvmf_tgt_poll_group_001", 00:27:39.826 "admin_qpairs": 0, 00:27:39.826 "io_qpairs": 1, 00:27:39.826 "current_admin_qpairs": 0, 00:27:39.826 "current_io_qpairs": 1, 00:27:39.826 "pending_bdev_io": 0, 00:27:39.826 "completed_nvme_io": 18574, 00:27:39.826 "transports": [ 00:27:39.826 { 00:27:39.826 "trtype": "TCP" 00:27:39.826 } 00:27:39.826 ] 00:27:39.826 }, 00:27:39.826 { 00:27:39.826 "name": "nvmf_tgt_poll_group_002", 00:27:39.826 "admin_qpairs": 0, 00:27:39.826 "io_qpairs": 0, 00:27:39.826 "current_admin_qpairs": 0, 00:27:39.826 "current_io_qpairs": 0, 00:27:39.826 "pending_bdev_io": 0, 
00:27:39.826 "completed_nvme_io": 0, 00:27:39.826 "transports": [ 00:27:39.826 { 00:27:39.826 "trtype": "TCP" 00:27:39.826 } 00:27:39.826 ] 00:27:39.826 }, 00:27:39.826 { 00:27:39.826 "name": "nvmf_tgt_poll_group_003", 00:27:39.826 "admin_qpairs": 0, 00:27:39.826 "io_qpairs": 0, 00:27:39.826 "current_admin_qpairs": 0, 00:27:39.826 "current_io_qpairs": 0, 00:27:39.826 "pending_bdev_io": 0, 00:27:39.826 "completed_nvme_io": 0, 00:27:39.826 "transports": [ 00:27:39.826 { 00:27:39.826 "trtype": "TCP" 00:27:39.826 } 00:27:39.826 ] 00:27:39.826 } 00:27:39.826 ] 00:27:39.826 }' 00:27:39.826 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:39.826 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:39.826 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:39.826 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:40.084 06:19:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 226215 00:27:48.201 Initializing NVMe Controllers 00:27:48.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:48.201 Initialization complete. Launching workers. 
00:27:48.201 ======================================================== 00:27:48.201 Latency(us) 00:27:48.201 Device Information : IOPS MiB/s Average min max 00:27:48.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10022.84 39.15 6385.17 2132.32 9615.00 00:27:48.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3665.98 14.32 17473.37 2652.14 66128.56 00:27:48.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3522.38 13.76 18176.23 2702.01 69844.80 00:27:48.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3792.58 14.81 16886.50 2346.52 64831.55 00:27:48.201 ======================================================== 00:27:48.201 Total : 21003.78 82.05 12194.07 2132.32 69844.80 00:27:48.201 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.201 rmmod nvme_tcp 00:27:48.201 rmmod nvme_fabrics 00:27:48.201 rmmod nvme_keyring 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:48.201 06:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 226047 ']' 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 226047 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 226047 ']' 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 226047 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 226047 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 226047' 00:27:48.201 killing process with pid 226047 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 226047 00:27:48.201 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 226047 00:27:49.576 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.576 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:52.145 00:27:52.145 real 0m48.141s 00:27:52.145 user 2m46.635s 00:27:52.145 sys 0m12.353s 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.145 ************************************ 00:27:52.145 END TEST nvmf_perf_adq 00:27:52.145 ************************************ 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:52.145 ************************************ 00:27:52.145 START TEST nvmf_shutdown 00:27:52.145 ************************************ 00:27:52.145 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:52.145 * Looking for test storage... 00:27:52.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.145 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:52.146 06:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:52.146 ************************************ 00:27:52.146 START TEST nvmf_shutdown_tc1 00:27:52.146 ************************************ 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.146 06:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.146 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.047 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.048 06:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.048 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.048 06:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.048 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:27:54.048 00:27:54.048 --- 10.0.0.2 ping statistics --- 00:27:54.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.048 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:27:54.048 00:27:54.048 --- 10.0.0.1 ping statistics --- 00:27:54.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.048 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.048 
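The namespace setup traced above (flush the test interfaces, move the target side into a netns, assign the 10.0.0.0/24 addresses, open TCP port 4420, then ping both ways) can be sketched as a standalone script. This is a hedged reconstruction of the pattern, not SPDK's `nvmf_tcp_init` itself; `setup_nvmf_tcp_ns` and `run` are hypothetical helper names, and only the interface names, addresses, and port are taken from the log. Applying the commands needs root, so the sketch defaults to a dry run that prints each command.

```shell
#!/usr/bin/env bash
# Sketch of the netns-based NVMe/TCP loopback rig seen in the log above.
# DRY_RUN=1 (the default) prints each command; set DRY_RUN=0 as root to apply.

run() {
    if [[ ${DRY_RUN:-1} -eq 0 ]]; then "$@"; else echo "$@"; fi
}

setup_nvmf_tcp_ns() {
    local target_if=${1:-cvl_0_0} initiator_if=${2:-cvl_0_1}
    local ns=${3:-cvl_0_0_ns_spdk}

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"

    # Target interface lives inside the namespace; initiator stays outside.
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP listener port toward the initiator side.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Sanity checks, mirroring the two pings in the log.
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_nvmf_tcp_ns
```

With the namespace in place, the target app is launched under `ip netns exec cvl_0_0_ns_spdk …` (as the log's `NVMF_TARGET_NS_CMD` shows), so target and initiator get separate network stacks on one host.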
06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=229573 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 229573 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 229573 ']' 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.048 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.048 [2024-07-26 06:20:05.189517] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:54.048 [2024-07-26 06:20:05.189647] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.048 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.048 [2024-07-26 06:20:05.328029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.305 [2024-07-26 06:20:05.586804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.305 [2024-07-26 06:20:05.586882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.305 [2024-07-26 06:20:05.586910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.305 [2024-07-26 06:20:05.586931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.305 [2024-07-26 06:20:05.586953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:54.306 [2024-07-26 06:20:05.587096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.306 [2024-07-26 06:20:05.587193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.306 [2024-07-26 06:20:05.587237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.306 [2024-07-26 06:20:05.587248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.870 [2024-07-26 06:20:06.159995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.870 06:20:06 
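The `nvmfappstart`/`waitforlisten` sequence above (launch `nvmf_tgt`, then block until it listens on `/var/tmp/spdk.sock`) reduces to polling for the RPC socket with a bounded retry count. A minimal sketch of that pattern, where `wait_for_socket` is a hypothetical stand-in for SPDK's helper (the real one also verifies the socket answers an RPC and that the PID is still alive):

```shell
#!/usr/bin/env bash
# Poll until a path (normally the app's UNIX-domain RPC socket) appears,
# giving up after max_retries attempts.

wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if [[ -S $sock || -e $sock ]]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Demo: a background job creates the path shortly after polling starts.
demo=$(mktemp -u)
( sleep 0.2; touch "$demo" ) &
wait_for_socket "$demo" && echo "ready: $demo"
wait
rm -f "$demo"
```

Bounding the retries matters in CI: if the target crashes during startup, the harness fails fast instead of hanging the whole nightly run.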
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:54.870 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.871 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.130 Malloc1 00:27:55.130 [2024-07-26 06:20:06.295786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.130 Malloc2 00:27:55.389 Malloc3 00:27:55.390 Malloc4 00:27:55.390 Malloc5 00:27:55.649 Malloc6 00:27:55.649 Malloc7 00:27:55.908 Malloc8 00:27:55.908 Malloc9 
00:27:55.908 Malloc10 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=229816 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 229816 /var/tmp/bdevperf.sock 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 229816 ']' 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:55.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.908 { 00:27:55.908 "params": { 00:27:55.908 "name": "Nvme$subsystem", 00:27:55.908 "trtype": "$TEST_TRANSPORT", 00:27:55.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.908 "adrfam": "ipv4", 00:27:55.908 "trsvcid": "$NVMF_PORT", 00:27:55.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.908 "hdgst": ${hdgst:-false}, 00:27:55.908 "ddgst": ${ddgst:-false} 00:27:55.908 }, 00:27:55.908 "method": "bdev_nvme_attach_controller" 00:27:55.908 } 00:27:55.908 EOF 00:27:55.908 )") 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.908 { 00:27:55.908 "params": { 00:27:55.908 "name": "Nvme$subsystem", 00:27:55.908 "trtype": "$TEST_TRANSPORT", 00:27:55.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.908 "adrfam": "ipv4", 00:27:55.908 "trsvcid": "$NVMF_PORT", 00:27:55.908 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.908 "hdgst": ${hdgst:-false}, 00:27:55.908 "ddgst": ${ddgst:-false} 00:27:55.908 }, 00:27:55.908 "method": "bdev_nvme_attach_controller" 00:27:55.908 } 00:27:55.908 EOF 00:27:55.908 )") 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.908 { 00:27:55.908 "params": { 00:27:55.908 "name": "Nvme$subsystem", 00:27:55.908 "trtype": "$TEST_TRANSPORT", 00:27:55.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.908 "adrfam": "ipv4", 00:27:55.908 "trsvcid": "$NVMF_PORT", 00:27:55.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.908 "hdgst": ${hdgst:-false}, 00:27:55.908 "ddgst": ${ddgst:-false} 00:27:55.908 }, 00:27:55.908 "method": "bdev_nvme_attach_controller" 00:27:55.908 } 00:27:55.908 EOF 00:27:55.908 )") 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.908 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.908 { 00:27:55.908 "params": { 00:27:55.908 "name": "Nvme$subsystem", 00:27:55.908 "trtype": "$TEST_TRANSPORT", 00:27:55.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.908 "adrfam": "ipv4", 00:27:55.908 "trsvcid": "$NVMF_PORT", 00:27:55.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.908 "hdgst": 
${hdgst:-false}, 00:27:55.908 "ddgst": ${ddgst:-false} 00:27:55.908 }, 00:27:55.908 "method": "bdev_nvme_attach_controller" 00:27:55.908 } 00:27:55.909 EOF 00:27:55.909 )") 00:27:55.909 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.167 { 00:27:56.167 "params": { 00:27:56.167 "name": "Nvme$subsystem", 00:27:56.167 "trtype": "$TEST_TRANSPORT", 00:27:56.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.167 "adrfam": "ipv4", 00:27:56.167 "trsvcid": "$NVMF_PORT", 00:27:56.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.167 "hdgst": ${hdgst:-false}, 00:27:56.167 "ddgst": ${ddgst:-false} 00:27:56.167 }, 00:27:56.167 "method": "bdev_nvme_attach_controller" 00:27:56.167 } 00:27:56.167 EOF 00:27:56.167 )") 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.167 { 00:27:56.167 "params": { 00:27:56.167 "name": "Nvme$subsystem", 00:27:56.167 "trtype": "$TEST_TRANSPORT", 00:27:56.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.167 "adrfam": "ipv4", 00:27:56.167 "trsvcid": "$NVMF_PORT", 00:27:56.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.167 "hdgst": ${hdgst:-false}, 00:27:56.167 "ddgst": ${ddgst:-false} 00:27:56.167 }, 00:27:56.167 "method": "bdev_nvme_attach_controller" 
00:27:56.167 } 00:27:56.167 EOF 00:27:56.167 )") 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.167 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.167 { 00:27:56.167 "params": { 00:27:56.167 "name": "Nvme$subsystem", 00:27:56.167 "trtype": "$TEST_TRANSPORT", 00:27:56.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.167 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "$NVMF_PORT", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.168 "hdgst": ${hdgst:-false}, 00:27:56.168 "ddgst": ${ddgst:-false} 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 } 00:27:56.168 EOF 00:27:56.168 )") 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.168 { 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme$subsystem", 00:27:56.168 "trtype": "$TEST_TRANSPORT", 00:27:56.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "$NVMF_PORT", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.168 "hdgst": ${hdgst:-false}, 00:27:56.168 "ddgst": ${ddgst:-false} 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 } 00:27:56.168 EOF 00:27:56.168 )") 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.168 { 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme$subsystem", 00:27:56.168 "trtype": "$TEST_TRANSPORT", 00:27:56.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "$NVMF_PORT", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.168 "hdgst": ${hdgst:-false}, 00:27:56.168 "ddgst": ${ddgst:-false} 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 } 00:27:56.168 EOF 00:27:56.168 )") 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.168 { 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme$subsystem", 00:27:56.168 "trtype": "$TEST_TRANSPORT", 00:27:56.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "$NVMF_PORT", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.168 "hdgst": ${hdgst:-false}, 00:27:56.168 "ddgst": ${ddgst:-false} 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 } 00:27:56.168 EOF 00:27:56.168 )") 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:56.168 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme1", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme2", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme3", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme4", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 
00:27:56.168 "params": { 00:27:56.168 "name": "Nvme5", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme6", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme7", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme8", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme9", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:56.168 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:56.168 "hdgst": false, 00:27:56.168 "ddgst": false 00:27:56.168 }, 00:27:56.168 "method": "bdev_nvme_attach_controller" 00:27:56.168 },{ 00:27:56.168 "params": { 00:27:56.168 "name": "Nvme10", 00:27:56.168 "trtype": "tcp", 00:27:56.168 "traddr": "10.0.0.2", 00:27:56.168 "adrfam": "ipv4", 00:27:56.168 "trsvcid": "4420", 00:27:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:56.168 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:56.169 "hdgst": false, 00:27:56.169 "ddgst": false 00:27:56.169 }, 00:27:56.169 "method": "bdev_nvme_attach_controller" 00:27:56.169 }' 00:27:56.169 [2024-07-26 06:20:07.310498] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:56.169 [2024-07-26 06:20:07.310637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:56.169 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.169 [2024-07-26 06:20:07.448829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.427 [2024-07-26 06:20:07.686541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:58.959 06:20:09 
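The `gen_nvmf_target_json` expansion traced above follows a common bash pattern: emit one heredoc JSON fragment per subsystem into an array, then comma-join the fragments into a single document that `bdevperf` reads from `/dev/fd/63`. A hedged sketch of that pattern (`gen_target_json` is an illustrative stand-in, with the transport, address, and port values hard-coded from this log rather than taken from `$TEST_TRANSPORT` etc.; it also wraps the fragments in a plain JSON array rather than SPDK's full config document):

```shell
#!/usr/bin/env bash
# One heredoc fragment per subsystem number; $subsystem expands inside the
# heredoc, so each fragment gets its own cnodeN/hostN NQNs.

gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Comma-join the fragments ("${config[*]}" joins on the first IFS char).
    local IFS=,
    printf '[%s]\n' "${config[*]}"
}

gen_target_json 1 2 3
```

The `IFS=,` join is exactly what produces the `},{` boundaries visible in the log's `printf '%s\n'` dump; piping the result through `jq .` (as `nvmf/common.sh` does) both validates and pretty-prints it.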
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 229816 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:58.959 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:59.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 229816 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 229573 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": 
"$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 
}, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.897 { 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme$subsystem", 00:27:59.897 "trtype": "$TEST_TRANSPORT", 00:27:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "$NVMF_PORT", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.897 "hdgst": ${hdgst:-false}, 00:27:59.897 "ddgst": ${ddgst:-false} 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 } 00:27:59.897 EOF 00:27:59.897 )") 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:59.897 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme1", 00:27:59.897 "trtype": "tcp", 00:27:59.897 "traddr": "10.0.0.2", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "4420", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.897 "hdgst": false, 00:27:59.897 "ddgst": false 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 },{ 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme2", 00:27:59.897 "trtype": "tcp", 00:27:59.897 "traddr": "10.0.0.2", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "4420", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:59.897 "hdgst": false, 00:27:59.897 "ddgst": false 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 },{ 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme3", 00:27:59.897 "trtype": "tcp", 00:27:59.897 "traddr": "10.0.0.2", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "4420", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:59.897 "hdgst": false, 00:27:59.897 "ddgst": false 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 },{ 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme4", 00:27:59.897 "trtype": "tcp", 00:27:59.897 "traddr": "10.0.0.2", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "4420", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:59.897 "hdgst": false, 00:27:59.897 "ddgst": false 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 },{ 00:27:59.897 "params": { 
00:27:59.897 "name": "Nvme5", 00:27:59.897 "trtype": "tcp", 00:27:59.897 "traddr": "10.0.0.2", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "4420", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:59.897 "hdgst": false, 00:27:59.897 "ddgst": false 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.897 },{ 00:27:59.897 "params": { 00:27:59.897 "name": "Nvme6", 00:27:59.897 "trtype": "tcp", 00:27:59.897 "traddr": "10.0.0.2", 00:27:59.897 "adrfam": "ipv4", 00:27:59.897 "trsvcid": "4420", 00:27:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:59.897 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:59.897 "hdgst": false, 00:27:59.897 "ddgst": false 00:27:59.897 }, 00:27:59.897 "method": "bdev_nvme_attach_controller" 00:27:59.898 },{ 00:27:59.898 "params": { 00:27:59.898 "name": "Nvme7", 00:27:59.898 "trtype": "tcp", 00:27:59.898 "traddr": "10.0.0.2", 00:27:59.898 "adrfam": "ipv4", 00:27:59.898 "trsvcid": "4420", 00:27:59.898 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:59.898 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:59.898 "hdgst": false, 00:27:59.898 "ddgst": false 00:27:59.898 }, 00:27:59.898 "method": "bdev_nvme_attach_controller" 00:27:59.898 },{ 00:27:59.898 "params": { 00:27:59.898 "name": "Nvme8", 00:27:59.898 "trtype": "tcp", 00:27:59.898 "traddr": "10.0.0.2", 00:27:59.898 "adrfam": "ipv4", 00:27:59.898 "trsvcid": "4420", 00:27:59.898 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:59.898 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:59.898 "hdgst": false, 00:27:59.898 "ddgst": false 00:27:59.898 }, 00:27:59.898 "method": "bdev_nvme_attach_controller" 00:27:59.898 },{ 00:27:59.898 "params": { 00:27:59.898 "name": "Nvme9", 00:27:59.898 "trtype": "tcp", 00:27:59.898 "traddr": "10.0.0.2", 00:27:59.898 "adrfam": "ipv4", 00:27:59.898 "trsvcid": "4420", 00:27:59.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:59.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:27:59.898 "hdgst": false, 00:27:59.898 "ddgst": false 00:27:59.898 }, 00:27:59.898 "method": "bdev_nvme_attach_controller" 00:27:59.898 },{ 00:27:59.898 "params": { 00:27:59.898 "name": "Nvme10", 00:27:59.898 "trtype": "tcp", 00:27:59.898 "traddr": "10.0.0.2", 00:27:59.898 "adrfam": "ipv4", 00:27:59.898 "trsvcid": "4420", 00:27:59.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:59.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:59.898 "hdgst": false, 00:27:59.898 "ddgst": false 00:27:59.898 }, 00:27:59.898 "method": "bdev_nvme_attach_controller" 00:27:59.898 }' 00:27:59.898 [2024-07-26 06:20:11.077596] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:59.898 [2024-07-26 06:20:11.077740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230355 ] 00:27:59.898 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.898 [2024-07-26 06:20:11.205643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.155 [2024-07-26 06:20:11.448192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.059 Running I/O for 1 seconds... 
00:28:03.433 00:28:03.433 Latency(us) 00:28:03.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.433 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme1n1 : 1.20 213.37 13.34 0.00 0.00 296672.52 26602.76 312242.63 00:28:03.433 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme2n1 : 1.08 193.36 12.09 0.00 0.00 309378.00 22039.51 306028.85 00:28:03.433 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme3n1 : 1.11 177.13 11.07 0.00 0.00 333730.50 3616.62 318456.41 00:28:03.433 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme4n1 : 1.20 223.22 13.95 0.00 0.00 267317.70 6796.33 276513.37 00:28:03.433 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme5n1 : 1.21 211.17 13.20 0.00 0.00 278486.85 21748.24 270299.59 00:28:03.433 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme6n1 : 1.22 209.60 13.10 0.00 0.00 277441.99 21068.61 292047.83 00:28:03.433 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme7n1 : 1.18 163.15 10.20 0.00 0.00 348707.14 25826.04 321563.31 00:28:03.433 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.433 Verification LBA range: start 0x0 length 0x400 00:28:03.433 Nvme8n1 : 1.18 217.03 13.56 0.00 0.00 254815.00 20486.07 293601.28 00:28:03.434 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:28:03.434 Verification LBA range: start 0x0 length 0x400 00:28:03.434 Nvme9n1 : 1.23 208.94 13.06 0.00 0.00 263607.94 23107.51 284280.60 00:28:03.434 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.434 Verification LBA range: start 0x0 length 0x400 00:28:03.434 Nvme10n1 : 1.24 207.18 12.95 0.00 0.00 261432.51 22816.24 338651.21 00:28:03.434 =================================================================================================================== 00:28:03.434 Total : 2024.13 126.51 0.00 0.00 285950.96 3616.62 338651.21 00:28:04.367 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- 
# set +e 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.367 rmmod nvme_tcp 00:28:04.367 rmmod nvme_fabrics 00:28:04.367 rmmod nvme_keyring 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 229573 ']' 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 229573 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 229573 ']' 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 229573 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 229573 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:04.367 06:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 229573' 00:28:04.367 killing process with pid 229573 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 229573 00:28:04.367 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 229573 00:28:07.655 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.655 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.638 00:28:09.638 real 0m17.490s 00:28:09.638 user 0m56.530s 00:28:09.638 sys 0m3.889s 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:09.638 ************************************ 00:28:09.638 END TEST nvmf_shutdown_tc1 00:28:09.638 ************************************ 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:09.638 ************************************ 00:28:09.638 START TEST nvmf_shutdown_tc2 00:28:09.638 ************************************ 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.638 06:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.638 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.639 06:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.639 06:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.639 06:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.639 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:09.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:28:09.640 00:28:09.640 --- 10.0.0.2 ping statistics --- 00:28:09.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.640 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:28:09.640 00:28:09.640 --- 10.0.0.1 ping statistics --- 00:28:09.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.640 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.640 06:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=231526 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 231526 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 231526 ']' 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
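The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the `waitforlisten` helper, which polls until the freshly started `nvmf_tgt` exposes its RPC socket before any `rpc_cmd` is issued. A minimal sketch of that polling idea (the function name, retry count, and interval are illustrative; SPDK's real helper also verifies the PID is alive and retries the RPC itself):

```shell
#!/usr/bin/env bash
# Poll until a path (here, the target's RPC UNIX socket) appears,
# giving up after max_retries attempts. Illustrative only -- not
# SPDK's actual waitforlisten implementation.
waitforsocket() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # The real helper checks for the UNIX socket specifically;
        # -e keeps this sketch testable with any file type.
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}
```

The same guard runs again later in the log for `/var/tmp/bdevperf.sock` before the bdevperf RPCs are sent.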
00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.640 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:09.640 [2024-07-26 06:20:20.885384] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:09.640 [2024-07-26 06:20:20.885540] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.640 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.898 [2024-07-26 06:20:21.023717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.156 [2024-07-26 06:20:21.278593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.156 [2024-07-26 06:20:21.278665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.156 [2024-07-26 06:20:21.278693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.156 [2024-07-26 06:20:21.278714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.156 [2024-07-26 06:20:21.278735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:10.156 [2024-07-26 06:20:21.278871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.156 [2024-07-26 06:20:21.278987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.156 [2024-07-26 06:20:21.279024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.156 [2024-07-26 06:20:21.279034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.722 [2024-07-26 06:20:21.822814] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.722 06:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.722 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
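The alternating `for i in "${num_subsystems[@]}"` / `cat` lines above are shutdown.sh appending one heredoc of RPC commands per subsystem into rpcs.txt, which is then submitted as a single batch (producing the Malloc1..Malloc10 bdevs that follow). A runnable sketch of the pattern — the specific RPC names below are illustrative of SPDK's rpc.py interface, not copied from shutdown.sh:

```shell
#!/usr/bin/env bash
num_subsystems=({1..10})
rpcs=$(mktemp)

# One heredoc per subsystem; $i expands inside the heredoc, so each
# appended block targets its own bdev/subsystem.
for i in "${num_subsystems[@]}"; do
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# The batch would then run in one rpc.py invocation, e.g.: rpc.py < "$rpcs"
wc -l < "$rpcs"
```

Batching the RPCs into one file avoids paying the per-call socket round-trip ten times over.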
00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.723 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.723 Malloc1 00:28:10.723 [2024-07-26 06:20:21.955032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.723 Malloc2 00:28:10.982 Malloc3 00:28:10.982 Malloc4 00:28:11.242 Malloc5 00:28:11.242 Malloc6 00:28:11.242 Malloc7 00:28:11.501 Malloc8 00:28:11.501 Malloc9 
00:28:11.759 Malloc10 00:28:11.759 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=231836 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 231836 /var/tmp/bdevperf.sock 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 231836 ']' 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:11.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": 
${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 
00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.760 "ddgst": ${ddgst:-false} 00:28:11.760 }, 00:28:11.760 "method": "bdev_nvme_attach_controller" 00:28:11.760 } 00:28:11.760 EOF 00:28:11.760 )") 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.760 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.760 { 00:28:11.760 "params": { 00:28:11.760 "name": "Nvme$subsystem", 00:28:11.760 "trtype": "$TEST_TRANSPORT", 00:28:11.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.760 "adrfam": "ipv4", 00:28:11.760 "trsvcid": "$NVMF_PORT", 00:28:11.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.760 "hdgst": ${hdgst:-false}, 00:28:11.761 "ddgst": ${ddgst:-false} 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 } 00:28:11.761 EOF 00:28:11.761 )") 00:28:11.761 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.761 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.761 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.761 { 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme$subsystem", 00:28:11.761 "trtype": "$TEST_TRANSPORT", 00:28:11.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "$NVMF_PORT", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.761 "hdgst": ${hdgst:-false}, 00:28:11.761 "ddgst": ${ddgst:-false} 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 } 00:28:11.761 EOF 00:28:11.761 )") 00:28:11.761 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:11.761 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:28:11.761 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:11.761 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme1", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme2", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme3", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme4", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 
00:28:11.761 "params": { 00:28:11.761 "name": "Nvme5", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme6", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme7", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme8", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme9", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:11.761 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 },{ 00:28:11.761 "params": { 00:28:11.761 "name": "Nvme10", 00:28:11.761 "trtype": "tcp", 00:28:11.761 "traddr": "10.0.0.2", 00:28:11.761 "adrfam": "ipv4", 00:28:11.761 "trsvcid": "4420", 00:28:11.761 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:11.761 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:11.761 "hdgst": false, 00:28:11.761 "ddgst": false 00:28:11.761 }, 00:28:11.761 "method": "bdev_nvme_attach_controller" 00:28:11.761 }' 00:28:11.761 [2024-07-26 06:20:22.974012] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:11.761 [2024-07-26 06:20:22.974202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231836 ] 00:28:11.761 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.020 [2024-07-26 06:20:23.109140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.020 [2024-07-26 06:20:23.352863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.567 Running I/O for 10 seconds... 
00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:14.567 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:14.826 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 231836 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 231836 ']' 
00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 231836 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 231836 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 231836' 00:28:15.085 killing process with pid 231836 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 231836 00:28:15.085 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 231836 00:28:15.344 Received shutdown signal, test time was about 1.004928 seconds 00:28:15.344 00:28:15.344 Latency(us) 00:28:15.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.344 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme1n1 : 0.95 201.79 12.61 0.00 0.00 311981.45 21554.06 248551.35 00:28:15.344 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme2n1 : 0.98 195.49 12.22 0.00 0.00 316640.52 24758.04 301368.51 00:28:15.344 Job: Nvme3n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme3n1 : 0.94 204.40 12.78 0.00 0.00 294527.94 21845.33 304475.40 00:28:15.344 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme4n1 : 0.96 200.01 12.50 0.00 0.00 295156.24 41943.04 299815.06 00:28:15.344 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme5n1 : 0.97 198.27 12.39 0.00 0.00 292237.46 25631.86 295154.73 00:28:15.344 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme6n1 : 0.96 205.00 12.81 0.00 0.00 273163.26 11747.93 273406.48 00:28:15.344 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme7n1 : 0.98 196.91 12.31 0.00 0.00 281276.62 23787.14 298261.62 00:28:15.344 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme8n1 : 1.00 192.95 12.06 0.00 0.00 281403.04 23884.23 318456.41 00:28:15.344 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme9n1 : 0.99 198.03 12.38 0.00 0.00 266909.23 2063.17 295154.73 00:28:15.344 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.344 Verification LBA range: start 0x0 length 0x400 00:28:15.344 Nvme10n1 : 1.00 191.23 11.95 0.00 0.00 271317.78 21748.24 344865.00 00:28:15.344 =================================================================================================================== 00:28:15.344 Total : 1984.09 124.01 0.00 0.00 288384.90 2063.17 344865.00 00:28:16.278 libgcov 
profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:16.537 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 231526 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.476 rmmod nvme_tcp 00:28:17.476 rmmod nvme_fabrics 00:28:17.476 rmmod nvme_keyring 00:28:17.476 
06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 231526 ']' 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 231526 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 231526 ']' 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 231526 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 231526 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 231526' 00:28:17.476 killing process with pid 231526 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 231526 00:28:17.476 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@974 -- # wait 231526 00:28:20.763 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.763 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.670 00:28:22.670 real 0m13.036s 00:28:22.670 user 0m43.910s 00:28:22.670 sys 0m2.038s 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.670 ************************************ 00:28:22.670 END TEST nvmf_shutdown_tc2 00:28:22.670 ************************************ 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:22.670 ************************************ 00:28:22.670 START TEST nvmf_shutdown_tc3 00:28:22.670 ************************************ 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.670 06:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.670 06:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:22.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.670 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # 
for pci in "${pci_devs[@]}" 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:22.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.671 06:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:22.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:22.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.671 06:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.671 
06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:22.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:28:22.671 00:28:22.671 --- 10.0.0.2 ping statistics --- 00:28:22.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.671 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:28:22.671 00:28:22.671 --- 10.0.0.1 ping statistics --- 00:28:22.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.671 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.671 06:20:33 
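The `nvmf_tcp_init` sequence traced above follows a simple pattern: flush both interfaces, move the target-side interface into a private network namespace, address both ends from the same /24, open port 4420, and prove reachability with one ping in each direction. A minimal sketch of that plumbing (interface names and addresses are the ones this run used; it defaults to a dry run via `RUN=echo` — clear `RUN` and run as root to actually apply it):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init plumbing logged above.
# Defaults to a dry run (RUN=echo); clear RUN and run as root to apply.
set -euo pipefail
RUN=${RUN:-echo}

NS=cvl_0_0_ns_spdk   # namespace that will own the target-side interface
TGT_IF=cvl_0_0       # target interface (moved into the namespace)
INI_IF=cvl_0_1       # initiator interface (stays in the root namespace)
INI_IP=10.0.0.1
TGT_IP=10.0.0.2

$RUN ip -4 addr flush "$TGT_IF"
$RUN ip -4 addr flush "$INI_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add "$INI_IP/24" dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface.
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# One ping in each direction proves the link before the target starts.
$RUN ping -c 1 "$TGT_IP"
$RUN ip netns exec "$NS" ping -c 1 "$INI_IP"
```

The namespace gives the target its own network stack on the same host, so initiator and target exercise a real TCP path over the two cabled interfaces.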
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=233272 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 233272 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 233272 ']' 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.671 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.671 [2024-07-26 06:20:34.002014] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:22.671 [2024-07-26 06:20:34.002190] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.931 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.931 [2024-07-26 06:20:34.149805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.189 [2024-07-26 06:20:34.412560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.189 [2024-07-26 06:20:34.412631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.189 [2024-07-26 06:20:34.412660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.189 [2024-07-26 06:20:34.412682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.189 [2024-07-26 06:20:34.412703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:23.189 [2024-07-26 06:20:34.412845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.189 [2024-07-26 06:20:34.412956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.189 [2024-07-26 06:20:34.412996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.189 [2024-07-26 06:20:34.413006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:23.756 [2024-07-26 06:20:34.980081] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.756 06:20:34 
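The `nvmfappstart -m 0x1E` call above boils down to launching `nvmf_tgt` inside the target namespace and blocking until its UNIX RPC socket answers. A hedged sketch of that sequence (the binary path, mask, and socket path are the values from this run; the real `waitforlisten` in autotest_common.sh handles more edge cases, and `scripts/rpc.py` is assumed to be on the current path):

```shell
#!/usr/bin/env bash
# Sketch of the nvmfappstart sequence logged above: start nvmf_tgt inside
# the target namespace, then poll the RPC socket until it answers.
set -euo pipefail

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

rpc_probe() {
    # Succeeds once the app is serving RPCs on $RPC_SOCK.
    scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &> /dev/null
}

waitforlisten() {
    local pid=$1 i
    echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid"      # abort if the target process died
        if rpc_probe; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

start_nvmf_tgt() {
    ip netns exec "$NVMF_TARGET_NAMESPACE" \
        "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
}
```

Core mask `0x1E` pins the target to cores 1-4, which matches the four "Reactor started" notices in the log.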
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.756 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:23.756 Malloc1 00:28:24.014 [2024-07-26 06:20:35.107706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.014 Malloc2 00:28:24.014 Malloc3 00:28:24.272 Malloc4 00:28:24.272 Malloc5 00:28:24.272 Malloc6 00:28:24.534 Malloc7 00:28:24.534 Malloc8 00:28:24.801 Malloc9 
00:28:24.801 Malloc10 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=233585 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 233585 /var/tmp/bdevperf.sock 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 233585 ']' 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:24.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.801 "adrfam": "ipv4", 00:28:24.801 "trsvcid": "$NVMF_PORT", 00:28:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.801 "hdgst": ${hdgst:-false}, 00:28:24.801 "ddgst": ${ddgst:-false} 00:28:24.801 }, 00:28:24.801 "method": "bdev_nvme_attach_controller" 00:28:24.801 } 00:28:24.801 EOF 00:28:24.801 )") 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.801 "adrfam": "ipv4", 00:28:24.801 "trsvcid": "$NVMF_PORT", 00:28:24.801 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.801 "hdgst": ${hdgst:-false}, 00:28:24.801 "ddgst": ${ddgst:-false} 00:28:24.801 }, 00:28:24.801 "method": "bdev_nvme_attach_controller" 00:28:24.801 } 00:28:24.801 EOF 00:28:24.801 )") 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.801 "adrfam": "ipv4", 00:28:24.801 "trsvcid": "$NVMF_PORT", 00:28:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.801 "hdgst": ${hdgst:-false}, 00:28:24.801 "ddgst": ${ddgst:-false} 00:28:24.801 }, 00:28:24.801 "method": "bdev_nvme_attach_controller" 00:28:24.801 } 00:28:24.801 EOF 00:28:24.801 )") 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.801 "adrfam": "ipv4", 00:28:24.801 "trsvcid": "$NVMF_PORT", 00:28:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.801 "hdgst": 
${hdgst:-false}, 00:28:24.801 "ddgst": ${ddgst:-false} 00:28:24.801 }, 00:28:24.801 "method": "bdev_nvme_attach_controller" 00:28:24.801 } 00:28:24.801 EOF 00:28:24.801 )") 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.801 "adrfam": "ipv4", 00:28:24.801 "trsvcid": "$NVMF_PORT", 00:28:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.801 "hdgst": ${hdgst:-false}, 00:28:24.801 "ddgst": ${ddgst:-false} 00:28:24.801 }, 00:28:24.801 "method": "bdev_nvme_attach_controller" 00:28:24.801 } 00:28:24.801 EOF 00:28:24.801 )") 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.801 "adrfam": "ipv4", 00:28:24.801 "trsvcid": "$NVMF_PORT", 00:28:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.801 "hdgst": ${hdgst:-false}, 00:28:24.801 "ddgst": ${ddgst:-false} 00:28:24.801 }, 00:28:24.801 "method": "bdev_nvme_attach_controller" 
00:28:24.801 } 00:28:24.801 EOF 00:28:24.801 )") 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.801 "adrfam": "ipv4", 00:28:24.801 "trsvcid": "$NVMF_PORT", 00:28:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.801 "hdgst": ${hdgst:-false}, 00:28:24.801 "ddgst": ${ddgst:-false} 00:28:24.801 }, 00:28:24.801 "method": "bdev_nvme_attach_controller" 00:28:24.801 } 00:28:24.801 EOF 00:28:24.801 )") 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.801 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.801 { 00:28:24.801 "params": { 00:28:24.801 "name": "Nvme$subsystem", 00:28:24.801 "trtype": "$TEST_TRANSPORT", 00:28:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "$NVMF_PORT", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.802 "hdgst": ${hdgst:-false}, 00:28:24.802 "ddgst": ${ddgst:-false} 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 } 00:28:24.802 EOF 00:28:24.802 )") 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@554 -- # cat 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.802 { 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme$subsystem", 00:28:24.802 "trtype": "$TEST_TRANSPORT", 00:28:24.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "$NVMF_PORT", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.802 "hdgst": ${hdgst:-false}, 00:28:24.802 "ddgst": ${ddgst:-false} 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 } 00:28:24.802 EOF 00:28:24.802 )") 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.802 { 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme$subsystem", 00:28:24.802 "trtype": "$TEST_TRANSPORT", 00:28:24.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "$NVMF_PORT", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.802 "hdgst": ${hdgst:-false}, 00:28:24.802 "ddgst": ${ddgst:-false} 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 } 00:28:24.802 EOF 00:28:24.802 )") 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@556 -- # jq . 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:24.802 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme1", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme2", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme3", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme4", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 
00:28:24.802 "params": { 00:28:24.802 "name": "Nvme5", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme6", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme7", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme8", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme9", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:24.802 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 },{ 00:28:24.802 "params": { 00:28:24.802 "name": "Nvme10", 00:28:24.802 "trtype": "tcp", 00:28:24.802 "traddr": "10.0.0.2", 00:28:24.802 "adrfam": "ipv4", 00:28:24.802 "trsvcid": "4420", 00:28:24.802 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:24.802 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:24.802 "hdgst": false, 00:28:24.802 "ddgst": false 00:28:24.802 }, 00:28:24.802 "method": "bdev_nvme_attach_controller" 00:28:24.802 }' 00:28:24.802 [2024-07-26 06:20:36.122017] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:24.802 [2024-07-26 06:20:36.122208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233585 ] 00:28:25.062 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.062 [2024-07-26 06:20:36.248170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.320 [2024-07-26 06:20:36.492722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.217 Running I/O for 10 seconds... 
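The wall of heredoc output above is `gen_nvmf_target_json` expanding one `bdev_nvme_attach_controller` stanza per subsystem index, comma-joining them, and feeding the result to bdevperf via `--json /dev/fd/63`. A simplified sketch of that generation step (variable values are the ones this run used; the real helper builds the stanzas with heredocs and pretty-prints through `jq`, details omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern logged above: one templated
# bdev_nvme_attach_controller stanza per subsystem, comma-joined.
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
            "$subsystem" "$subsystem")")
    done
    local IFS=,                       # join array elements with commas
    printf '[%s]\n' "${config[*]}"
}

# Ten subsystems in the real run; three here for brevity.
json=$(gen_nvmf_target_json 1 2 3)
echo "$json"
```

Each stanza tells bdevperf to attach one NVMe-oF controller (Nvme1..Nvme10) over TCP to the same target IP and port, one per created subsystem.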
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:27.783 06:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:28:27.783 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:28:27.783 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:28:27.783 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 233272
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 233272 ']'
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 233272
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 233272
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 233272'
killing process with pid 233272
00:28:28.056 06:20:39 
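The xtrace above is a bounded polling loop: read `num_read_ops` for `Nvme1n1` from bdevperf over the RPC socket, break once the count reaches 100 (here 67 on the first poll, 131 on the second), otherwise sleep 0.25 s and retry until the counter runs out. A minimal runnable sketch of that control flow; the `rpc_cmd ... bdev_get_iostat | jq` pipeline is replaced by a hypothetical stub (`get_read_ops`), and the retry budget of 10 is an assumption not visible in this trace:

```shell
#!/usr/bin/env bash
# Hypothetical stub standing in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'
# Each poll observes 67 more completed reads, mirroring the 67 -> 131 progression in the trace.
count=0
get_read_ops() {
    count=$((count + 67))
    read_io_count=$count
}

# Bounded polling loop; the 100-op threshold and 0.25 s sleep come from the
# trace (shutdown.sh@63/@67), the retry budget of 10 is assumed.
wait_for_io() {
    local i=10 ret=1
    while ((i--)); do
        get_read_ops
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret                 # nonzero here would fail the test case
}

wait_for_io && echo "reached ${read_io_count} read ops"
```

With the stub, the threshold is crossed on the second poll and the loop exits with status 0, which is what lets the script proceed to `killprocess`.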
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 233272
00:28:28.056 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 233272
00:28:28.056 [2024-07-26 06:20:39.179113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set
00:28:28.057 [2024-07-26 06:20:39.184730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set
00:28:28.057 [2024-07-26 06:20:39.189175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set
00:28:28.058 [2024-07-26 06:20:39.192667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set
[2024-07-26 06:20:39.193085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.058 [2024-07-26 06:20:39.193103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.058 [2024-07-26 06:20:39.193120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.058 [2024-07-26 06:20:39.193138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.058 [2024-07-26 06:20:39.193155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.058 [2024-07-26 06:20:39.193172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.058 [2024-07-26 06:20:39.193190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059 [2024-07-26 06:20:39.193207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059 [2024-07-26 06:20:39.193224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059 [2024-07-26 06:20:39.193242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059 [2024-07-26 06:20:39.193260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059 [2024-07-26 06:20:39.193277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the 
state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:28:28.059
[2024-07-26 06:20:39.193866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059
[2024-07-26 06:20:39.193913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059
[2024-07-26 06:20:39.193937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.193959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.193983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.059 [2024-07-26 06:20:39.194517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.059 [2024-07-26 06:20:39.194541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 
[2024-07-26 06:20:39.194773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.194955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.194977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:28.060 [2024-07-26 06:20:39.195581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195829] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.195971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.060 [2024-07-26 06:20:39.195995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.060 [2024-07-26 06:20:39.196016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.061 [2024-07-26 06:20:39.196566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.196637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:28.061 [2024-07-26 06:20:39.197263] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 
[2024-07-26 06:20:39.197495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the 
state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.197982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.198000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.200263] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9d00 was disconnected and freed. reset controller. 
00:28:28.061 [2024-07-26 06:20:39.200471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.061 [2024-07-26 06:20:39.200504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.200529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.061 [2024-07-26 06:20:39.200553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.200575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.061 [2024-07-26 06:20:39.200595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.200615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.061 [2024-07-26 06:20:39.200634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.200653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:28:28.061 [2024-07-26 06:20:39.200712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.061 [2024-07-26 06:20:39.200738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.061 [2024-07-26 06:20:39.200761] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.200780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.200787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.200807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.200824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.200828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.200847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.200850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.200867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.200871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.200885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.200890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be
set 00:28:28.062 [2024-07-26 06:20:39.200903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.200921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.200957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.200984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.201208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.201458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:28.062 [2024-07-26 06:20:39.201527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.201685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.201918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.201968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.201987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.202008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.202026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.202047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.202075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:28.062 [2024-07-26 06:20:39.202096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.202158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.202185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.202207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.202227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.202247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.202267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.202287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.062 [2024-07-26 06:20:39.202306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.202324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.203147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.062 [2024-07-26 06:20:39.203188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.203196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.203222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.062 [2024-07-26 06:20:39.203230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.203245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.062 [2024-07-26 06:20:39.203251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.203269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.203269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.062 [2024-07-26 06:20:39.203287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.062 [2024-07-26 06:20:39.203293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set
00:28:28.063 [2024-07-26 06:20:39.203578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26
06:20:39.203820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.203974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.203980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.203993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.204001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.204011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.204026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.204033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.204047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.204052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.204079]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-26 06:20:39.204079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:1with the state(5) to be set 00:28:28.063 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.204100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.204103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.063 [2024-07-26 06:20:39.204120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.204127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.063 [2024-07-26 06:20:39.204139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.063 [2024-07-26 06:20:39.204149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:1[2024-07-26 06:20:39.204176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 c[2024-07-26 06:20:39.204196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 06:20:39.204292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:28:28.064 [2024-07-26 06:20:39.204431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 
06:20:39.204699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.204972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.204996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 
[2024-07-26 06:20:39.205514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.064 [2024-07-26 06:20:39.205635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.064 [2024-07-26 06:20:39.205656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065 [2024-07-26 06:20:39.205679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065 [2024-07-26 06:20:39.205700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065 [2024-07-26 06:20:39.205723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065 [2024-07-26 06:20:39.205744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065 [2024-07-26 06:20:39.205768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.205789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.205812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.205833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.205823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.205862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.205909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.205928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.205946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.205965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.205986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.206005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.206023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.206052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.206098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.206117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.206135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.206181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.065
[2024-07-26 06:20:39.206199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.065
[2024-07-26 06:20:39.206217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:28.065
[2024-07-26 06:20:39.206271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065
[2024-07-26 06:20:39.206383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206550] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9080 was disconnected and freed. reset controller. 
00:28:28.065 [2024-07-26 06:20:39.206560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.065 [2024-07-26 06:20:39.206675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same 
with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.206995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.207013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:28:28.066 [2024-07-26 06:20:39.207316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.066 [2024-07-26 06:20:39.207551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.207961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.207984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 
[2024-07-26 06:20:39.208582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.066 [2024-07-26 06:20:39.208646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.066 [2024-07-26 06:20:39.208670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.208715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.208759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.208804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.208847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.208890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.208934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.208978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.208999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 
06:20:39.209614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.209959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.209985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.210008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.210034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.210057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.210108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.210129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.210153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.210174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.210198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.210219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.210242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.210263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.210290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.067 [2024-07-26 06:20:39.210312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.210372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:28.067 [2024-07-26 06:20:39.210651] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9580 was disconnected and freed. reset controller. 
00:28:28.067 [2024-07-26 06:20:39.212153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.067 [2024-07-26 06:20:39.212185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.212208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.067 [2024-07-26 06:20:39.212228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.212249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.067 [2024-07-26 06:20:39.212269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.067 [2024-07-26 06:20:39.212290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.067 [2024-07-26 06:20:39.212310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.212330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:28:28.068 [2024-07-26 06:20:39.212383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.212742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.068 [2024-07-26 06:20:39.212770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.212792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.068 [2024-07-26 06:20:39.212812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.212833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.068 [2024-07-26 06:20:39.212858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.212880] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.068 [2024-07-26 06:20:39.212900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.212919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:28:28.068 [2024-07-26 06:20:39.215908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:28.068 [2024-07-26 06:20:39.215968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:28.068 [2024-07-26 06:20:39.217132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:28.068 [2024-07-26 06:20:39.217365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.068 [2024-07-26 06:20:39.217409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:28:28.068 [2024-07-26 06:20:39.217444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:28:28.068 [2024-07-26 06:20:39.217584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.068 [2024-07-26 06:20:39.217619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:28:28.068 [2024-07-26 06:20:39.217643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:28:28.068 [2024-07-26 06:20:39.219455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.068 [2024-07-26 06:20:39.219495] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420 00:28:28.068 [2024-07-26 06:20:39.219519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:28:28.068 [2024-07-26 06:20:39.219552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.219586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.219667] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.068 [2024-07-26 06:20:39.219766] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.068 [2024-07-26 06:20:39.219855] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.068 [2024-07-26 06:20:39.219939] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.068 [2024-07-26 06:20:39.220155] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.068 [2024-07-26 06:20:39.220229] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.068 [2024-07-26 06:20:39.220297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:28:28.068 [2024-07-26 06:20:39.220330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:28.068 [2024-07-26 06:20:39.220352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:28.068 [2024-07-26 06:20:39.220376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:28.068 [2024-07-26 06:20:39.220415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:28.068 [2024-07-26 06:20:39.220436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:28.068 [2024-07-26 06:20:39.220463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:28.068 [2024-07-26 06:20:39.220657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.220698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.220757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.220785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.220812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.220835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.220859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.220880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.220905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.220926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.220950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.220972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.220996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 
[2024-07-26 06:20:39.221199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.068 [2024-07-26 06:20:39.221584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.068 [2024-07-26 06:20:39.221606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.221950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.221971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:28.069 [2024-07-26 06:20:39.221995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.069 [2024-07-26 06:20:39.222934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.069 [2024-07-26 06:20:39.222958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.222979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 
06:20:39.223024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.223679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.223701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:28:28.070 [2024-07-26 06:20:39.224008] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9300 was disconnected and freed. reset controller. 00:28:28.070 [2024-07-26 06:20:39.224140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.070 [2024-07-26 06:20:39.224171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.070 [2024-07-26 06:20:39.224206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:28.070 [2024-07-26 06:20:39.224227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:28.070 [2024-07-26 06:20:39.224248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:28.070 [2024-07-26 06:20:39.224343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:28:28.070 [2024-07-26 06:20:39.224398] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.070 [2024-07-26 06:20:39.224480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:28:28.070 [2024-07-26 06:20:39.225822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.070 [2024-07-26 06:20:39.225880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:28.070 [2024-07-26 06:20:39.225997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226256] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 06:20:39.226793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.070 [2024-07-26 06:20:39.226816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.070 [2024-07-26 
06:20:39.226837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.226861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.226882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.226905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.226930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.226955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.226977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 
[2024-07-26 06:20:39.227619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.227974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.227998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.071 [2024-07-26 06:20:39.228610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.071 [2024-07-26 06:20:39.228637] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228881] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.228969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.228991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8680 is same with the state(5) to be set 00:28:28.072 [2024-07-26 06:20:39.230587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.230961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.230984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 
06:20:39.231005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 
[2024-07-26 06:20:39.231789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.072 [2024-07-26 06:20:39.231969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.072 [2024-07-26 06:20:39.231993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232797] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.232973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.232993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.073 [2024-07-26 06:20:39.233017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.073 [2024-07-26 06:20:39.233037] nvme_qpair.c: 474:spdk_nvme_print_completion: 
[repetitive log output condensed — original records preserved in pattern form]
00:28:28.073–00:28:28.076 [2024-07-26 06:20:39.233071 → 06:20:39.241536] nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion: repeated *NOTICE* pairs — READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0.
The full cid:0..63 batch repeats once per TCP qpair; after each batch, nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state logs *ERROR*: The recv state of tqpair=0x6150001f8900 is same with the state(5) to be set (and likewise for tqpair=0x6150001f8b80).
A further batch begins at [2024-07-26 06:20:39.239547] (cid:0 lba:16384) and is truncated in this section mid-record at cid:43 lba:21888.
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.076 [2024-07-26 06:20:39.241557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.076 [2024-07-26 06:20:39.241581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.076 [2024-07-26 06:20:39.241601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.076 [2024-07-26 06:20:39.241625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.076 [2024-07-26 06:20:39.241645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.076 [2024-07-26 06:20:39.241669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.241689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.241712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.241733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.241757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.241777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 
06:20:39.241800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.241821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.241845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.241869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.241894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.241915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.241938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.241958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.241982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.242452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.242473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8e00 is same with the state(5) to be set 00:28:28.077 [2024-07-26 06:20:39.244200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.077 [2024-07-26 06:20:39.244248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:28.077 [2024-07-26 06:20:39.244277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:28.077 [2024-07-26 06:20:39.244613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:28:28.077 [2024-07-26 06:20:39.244653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:28:28.077 [2024-07-26 06:20:39.244678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:28:28.077 [2024-07-26 06:20:39.244774] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.077 [2024-07-26 06:20:39.244824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:28:28.077 [2024-07-26 06:20:39.245501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:28.077 [2024-07-26 06:20:39.245722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.077 [2024-07-26 06:20:39.245758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:28:28.077 [2024-07-26 06:20:39.245781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:28:28.077 [2024-07-26 06:20:39.246010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.077 [2024-07-26 06:20:39.246044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:28:28.077 [2024-07-26 06:20:39.246075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:28:28.077 [2024-07-26 06:20:39.246227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.077 [2024-07-26 06:20:39.246261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with 
addr=10.0.0.2, port=4420 00:28:28.077 [2024-07-26 06:20:39.246283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:28:28.077 [2024-07-26 06:20:39.248217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 
06:20:39.248468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.077 [2024-07-26 06:20:39.248771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.077 [2024-07-26 06:20:39.248792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.248815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.248835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.248859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.248879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.248902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.248922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.248950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.248972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.248995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:28.078 [2024-07-26 06:20:39.249227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249463] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.249968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.249989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 
06:20:39.250222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250463] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.078 [2024-07-26 06:20:39.250526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.078 [2024-07-26 06:20:39.250549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.250927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 
[2024-07-26 06:20:39.250971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.250993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.251014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.251037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.251063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.251100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9800 is same with the state(5) to be set 00:28:28.079 [2024-07-26 06:20:39.252616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.252692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.252745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.252789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.252833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.252877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.252920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.252963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.252984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:28.079 [2024-07-26 06:20:39.253295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.079 [2024-07-26 06:20:39.253581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.079 [2024-07-26 06:20:39.253604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.253959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.253979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 
06:20:39.254298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254543] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.254962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.254986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 
[2024-07-26 06:20:39.255049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.080 [2024-07-26 06:20:39.255344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.080 [2024-07-26 06:20:39.255365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.081 [2024-07-26 06:20:39.255388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.081 [2024-07-26 06:20:39.255408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.081 [2024-07-26 06:20:39.255432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.081 [2024-07-26 06:20:39.255452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.081 [2024-07-26 06:20:39.255476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.081 [2024-07-26 06:20:39.255509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.081 [2024-07-26 06:20:39.255536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:28:28.081 [2024-07-26 06:20:39.260134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode5] resetting controller
[2024-07-26 06:20:39.260175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
[2024-07-26 06:20:39.260199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
[2024-07-26 06:20:39.260232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
task offset: 20992 on job bdev=Nvme10n1 fails

Latency(us)
(all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended with error)
Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average       min       max
Nvme1n1            :       1.00  128.40    8.03   64.20  0.00 328658.49  25826.04 304475.40
Nvme2n1            :       1.00  127.83    7.99   63.91  0.00 323474.77  21359.88 304475.40
Nvme3n1            :       1.01  127.25    7.95   63.63  0.00 318367.42  27185.30 296708.17
Nvme4n1            :       1.01  126.69    7.92   63.35  0.00 313140.97  18738.44 338651.21
Nvme5n1            :       0.98  130.46    8.15   65.23  0.00 296935.10  10534.31 326223.64
Nvme6n1            :       0.99  151.14    9.45   64.49  0.00 263798.17  37476.88 279620.27
Nvme7n1            :       0.98  195.45   12.22   65.15  0.00 213048.70  22136.60 299815.06
Nvme8n1            :       1.02  125.63    7.85   62.81  0.00 289770.00  21748.24 279620.27
Nvme9n1            :       1.02  138.76    8.67   62.54  0.00 265347.04  22039.51 324670.20
Nvme10n1           :       0.98  130.78    8.17   65.39  0.00 263318.19  17767.54 337097.77
===================================================================================================================
Total              :            1382.40   86.40  640.70  0.00 284811.10  10534.31 338651.21
[2024-07-26 06:20:39.341888] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:28.081 [2024-07-26 06:20:39.342003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:28.081 [2024-07-26 06:20:39.342493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.081 [2024-07-26 06:20:39.342540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:28:28.081 [2024-07-26 06:20:39.342569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:28:28.081 [2024-07-26 06:20:39.342608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:28:28.081 [2024-07-26 06:20:39.342642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:28:28.081 [2024-07-26 06:20:39.342670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:28:28.081 [2024-07-26 06:20:39.342696] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:28.081 [2024-07-26 06:20:39.342715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:28.081 [2024-07-26 06:20:39.342738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:28.081 [2024-07-26 06:20:39.342836] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.081 [2024-07-26 06:20:39.342874] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.081 [2024-07-26 06:20:39.342902] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:28.081 [2024-07-26 06:20:39.342928] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.081 [2024-07-26 06:20:39.342957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:28:28.081 [2024-07-26 06:20:39.343287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.081 [2024-07-26 06:20:39.343474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.081 [2024-07-26 06:20:39.343511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:28:28.081 [2024-07-26 06:20:39.343535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:28:28.081 [2024-07-26 06:20:39.343770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.081 [2024-07-26 06:20:39.343804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:28:28.081 [2024-07-26 06:20:39.343826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:28:28.081 [2024-07-26 06:20:39.343949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.081 [2024-07-26 06:20:39.343982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420 00:28:28.081 [2024-07-26 06:20:39.344004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:28:28.081 [2024-07-26 06:20:39.344174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.081 [2024-07-26 06:20:39.344208] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:28:28.081 [2024-07-26 06:20:39.344231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:28:28.081 [2024-07-26 06:20:39.344376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.081 [2024-07-26 06:20:39.344417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:28:28.081 [2024-07-26 06:20:39.344440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:28:28.081 [2024-07-26 06:20:39.344464] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.081 [2024-07-26 06:20:39.344483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.081 [2024-07-26 06:20:39.344502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.081 [2024-07-26 06:20:39.344530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:28.081 [2024-07-26 06:20:39.344550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:28.081 [2024-07-26 06:20:39.344569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:28:28.081 [2024-07-26 06:20:39.344596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:28.081 [2024-07-26 06:20:39.344615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:28.081 [2024-07-26 06:20:39.344632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:28.081 [2024-07-26 06:20:39.344693] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.081 [2024-07-26 06:20:39.344724] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.081 [2024-07-26 06:20:39.344751] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.081 [2024-07-26 06:20:39.344777] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:28.081 [2024-07-26 06:20:39.345909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.081 [2024-07-26 06:20:39.345940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.081 [2024-07-26 06:20:39.345975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.082 [2024-07-26 06:20:39.346011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:28.082 [2024-07-26 06:20:39.346042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:28:28.082 [2024-07-26 06:20:39.346078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:28:28.082 [2024-07-26 06:20:39.346108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:28:28.082 [2024-07-26 06:20:39.346134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:28:28.082 [2024-07-26 06:20:39.346157] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:28.082 [2024-07-26 06:20:39.346175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:28.082 [2024-07-26 06:20:39.346193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:28.082 [2024-07-26 06:20:39.346318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:28.082 [2024-07-26 06:20:39.346350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.082 [2024-07-26 06:20:39.346394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:28.082 [2024-07-26 06:20:39.346422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:28.082 [2024-07-26 06:20:39.346446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:28.082 [2024-07-26 06:20:39.346474] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:28.082 [2024-07-26 06:20:39.346494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:28.082 [2024-07-26 06:20:39.346512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:28.082 [2024-07-26 06:20:39.346538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:28.082 [2024-07-26 06:20:39.346557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:28.082 [2024-07-26 06:20:39.346574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:28.082 [2024-07-26 06:20:39.346599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:28.082 [2024-07-26 06:20:39.346618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:28.082 [2024-07-26 06:20:39.346636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:28:28.082 [2024-07-26 06:20:39.346661] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:28.082 [2024-07-26 06:20:39.346679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:28.082 [2024-07-26 06:20:39.346697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:28.082 [2024-07-26 06:20:39.346779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.082 [2024-07-26 06:20:39.346805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.082 [2024-07-26 06:20:39.346823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.082 [2024-07-26 06:20:39.346839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.082 [2024-07-26 06:20:39.346855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.082 [2024-07-26 06:20:39.347045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.082 [2024-07-26 06:20:39.347107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:28:28.082 [2024-07-26 06:20:39.347131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:28:28.082 [2024-07-26 06:20:39.347202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:28:28.082 [2024-07-26 06:20:39.347266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:28.082 [2024-07-26 06:20:39.347292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:28.082 [2024-07-26 06:20:39.347311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:28.082 [2024-07-26 06:20:39.347373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.020 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:30.922 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:30.922 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:30.922 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 233585 00:28:31.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (233585) - No such process 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:31.856 06:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.856 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.856 rmmod nvme_tcp 00:28:31.856 rmmod nvme_fabrics 00:28:32.114 rmmod nvme_keyring 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:32.114 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.024 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.024 00:28:34.024 real 0m11.542s 00:28:34.024 user 0m33.253s 00:28:34.024 sys 0m2.018s 00:28:34.024 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.024 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:34.024 ************************************ 00:28:34.024 END TEST nvmf_shutdown_tc3 00:28:34.024 ************************************ 00:28:34.024 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:34.024 00:28:34.024 real 0m42.297s 00:28:34.024 user 2m13.796s 00:28:34.024 sys 0m8.087s 00:28:34.024 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.024 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:34.024 ************************************ 00:28:34.024 END TEST nvmf_shutdown 00:28:34.024 ************************************ 00:28:34.024 06:20:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:28:34.024 00:28:34.025 real 17m1.637s 00:28:34.025 user 47m6.196s 00:28:34.025 sys 3m23.524s 00:28:34.025 06:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.025 06:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:34.025 ************************************ 00:28:34.025 END TEST nvmf_target_extra 00:28:34.025 ************************************ 00:28:34.025 06:20:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:34.025 06:20:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:34.025 06:20:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.025 06:20:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:34.025 ************************************ 00:28:34.025 START TEST nvmf_host 00:28:34.025 ************************************ 00:28:34.025 06:20:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:34.283 * Looking for test storage... 00:28:34.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.283 06:20:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.284 ************************************ 00:28:34.284 START TEST nvmf_multicontroller 00:28:34.284 ************************************ 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:34.284 * Looking for test storage... 
00:28:34.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.284 06:20:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:36.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:36.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:36.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:36.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.186 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.187 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.187 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.187 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.187 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:28:36.447 00:28:36.447 --- 10.0.0.2 ping statistics --- 00:28:36.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.447 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:36.447 00:28:36.447 --- 10.0.0.1 ping statistics --- 00:28:36.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.447 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=236396 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 236396 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 236396 ']' 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.447 06:20:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.447 [2024-07-26 06:20:47.652428] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:36.447 [2024-07-26 06:20:47.652553] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.447 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.707 [2024-07-26 06:20:47.790568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:36.967 [2024-07-26 06:20:48.050930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.967 [2024-07-26 06:20:48.051018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:36.967 [2024-07-26 06:20:48.051054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.967 [2024-07-26 06:20:48.051087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.967 [2024-07-26 06:20:48.051110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.967 [2024-07-26 06:20:48.051260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.967 [2024-07-26 06:20:48.051352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.967 [2024-07-26 06:20:48.051361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.534 [2024-07-26 06:20:48.681009] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.534 Malloc0 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.534 [2024-07-26 
06:20:48.800185] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.534 [2024-07-26 06:20:48.807972] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.534 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.793 Malloc1 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=236556 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 236556 /var/tmp/bdevperf.sock 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 236556 ']' 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:37.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:37.793 06:20:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:38.729 06:20:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.729 06:20:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:28:38.729 06:20:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:38.729 06:20:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.729 06:20:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:38.999 NVMe0n1 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.999 1 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:38.999 request: 00:28:38.999 { 00:28:38.999 "name": "NVMe0", 00:28:38.999 "trtype": "tcp", 00:28:38.999 "traddr": "10.0.0.2", 00:28:38.999 "adrfam": "ipv4", 00:28:38.999 "trsvcid": "4420", 00:28:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.999 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:38.999 "hostaddr": "10.0.0.2", 00:28:38.999 "hostsvcid": "60000", 00:28:38.999 "prchk_reftag": false, 00:28:38.999 "prchk_guard": false, 00:28:38.999 "hdgst": false, 00:28:38.999 "ddgst": false, 00:28:38.999 "method": "bdev_nvme_attach_controller", 00:28:38.999 "req_id": 1 00:28:38.999 } 00:28:38.999 Got JSON-RPC error response 00:28:38.999 response: 00:28:38.999 { 00:28:38.999 "code": -114, 00:28:38.999 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:38.999 } 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:38.999 06:20:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:38.999 request: 00:28:38.999 { 00:28:38.999 "name": "NVMe0", 00:28:38.999 "trtype": "tcp", 00:28:38.999 "traddr": "10.0.0.2", 00:28:38.999 "adrfam": "ipv4", 00:28:38.999 "trsvcid": "4420", 00:28:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:38.999 "hostaddr": "10.0.0.2", 00:28:38.999 "hostsvcid": "60000", 00:28:38.999 "prchk_reftag": false, 00:28:38.999 "prchk_guard": false, 00:28:38.999 "hdgst": false, 00:28:38.999 "ddgst": false, 00:28:38.999 "method": "bdev_nvme_attach_controller", 00:28:38.999 "req_id": 1 00:28:38.999 } 00:28:38.999 Got JSON-RPC error response 00:28:38.999 response: 00:28:38.999 { 00:28:38.999 "code": -114, 00:28:38.999 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:38.999 } 00:28:38.999 
06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:38.999 06:20:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:38.999 request:
00:28:38.999 {
00:28:38.999 "name": "NVMe0",
00:28:38.999 "trtype": "tcp",
00:28:38.999 "traddr": "10.0.0.2",
00:28:38.999 "adrfam": "ipv4",
00:28:38.999 "trsvcid": "4420",
00:28:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:38.999 "hostaddr": "10.0.0.2",
00:28:38.999 "hostsvcid": "60000",
00:28:38.999 "prchk_reftag": false,
00:28:38.999 "prchk_guard": false,
00:28:38.999 "hdgst": false,
00:28:38.999 "ddgst": false,
00:28:38.999 "multipath": "disable",
00:28:38.999 "method": "bdev_nvme_attach_controller",
00:28:38.999 "req_id": 1
00:28:38.999 }
00:28:38.999 Got JSON-RPC error response
00:28:38.999 response:
00:28:38.999 {
00:28:38.999 "code": -114,
00:28:38.999 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:28:38.999 }
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:28:38.999 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:39.000 request:
00:28:39.000 {
00:28:39.000 "name": "NVMe0",
00:28:39.000 "trtype": "tcp",
00:28:39.000 "traddr": "10.0.0.2",
00:28:39.000 "adrfam": "ipv4",
00:28:39.000 "trsvcid": "4420",
00:28:39.000 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:39.000 "hostaddr": "10.0.0.2",
00:28:39.000 "hostsvcid": "60000",
00:28:39.000 "prchk_reftag": false,
00:28:39.000 "prchk_guard": false,
00:28:39.000 "hdgst": false,
00:28:39.000 "ddgst": false,
00:28:39.000 "multipath": "failover",
00:28:39.000 "method": "bdev_nvme_attach_controller",
00:28:39.000 "req_id": 1
00:28:39.000 }
00:28:39.000 Got JSON-RPC error response
00:28:39.000 response:
00:28:39.000 {
00:28:39.000 "code": -114,
00:28:39.000 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:28:39.000 }
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:39.000
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:28:39.000 06:20:50
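The request dumps above show the exact JSON-RPC payloads that rpc_cmd delivers over the /var/tmp/bdevperf.sock Unix socket. As a minimal sketch (field names and values copied verbatim from the logged request; nothing is actually sent to a target), the same payload can be built and round-tripped in Python:

```python
import json

# Mirror of the logged bdev_nvme_attach_controller request that the target
# rejects with code -114 ("A controller named NVMe0 already exists and
# multipath is disabled"). Every field below comes from the request dump.
request = {
    "name": "NVMe0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostaddr": "10.0.0.2",
    "hostsvcid": "60000",
    "prchk_reftag": False,
    "prchk_guard": False,
    "hdgst": False,
    "ddgst": False,
    "multipath": "disable",
    "method": "bdev_nvme_attach_controller",
    "req_id": 1,
}
# Serialize as the RPC layer would before writing to the Unix socket.
payload = json.dumps(request)
```

The second, failing call in the log differs only in `"multipath": "failover"`; the third succeeds because it targets a different port (4421), which registers a new path instead of a duplicate controller.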
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.000 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:39.279 00:28:39.279 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.279 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:39.279 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:39.279 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.279 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:39.279 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.279 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:39.280 06:20:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:40.656 0 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 236556 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 
236556 ']' 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 236556 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236556 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236556' 00:28:40.656 killing process with pid 236556 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 236556 00:28:40.656 06:20:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 236556 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:41.596 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:41.596 [2024-07-26 06:20:48.995698] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:41.596 [2024-07-26 06:20:48.995860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236556 ] 00:28:41.596 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.596 [2024-07-26 06:20:49.121241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.596 [2024-07-26 06:20:49.360748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.596 [2024-07-26 06:20:50.435178] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 0acaff7c-0d3a-49b7-b30f-b3a5a17fc45d already exists 00:28:41.596 [2024-07-26 06:20:50.435240] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:0acaff7c-0d3a-49b7-b30f-b3a5a17fc45d alias for bdev NVMe1n1 00:28:41.596 [2024-07-26 06:20:50.435281] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:41.596 Running I/O for 1 seconds... 
00:28:41.596
00:28:41.596 Latency(us)
00:28:41.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.596 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:28:41.596 NVMe0n1 : 1.01 13320.31 52.03 0.00 0.00 9592.21 6116.69 20486.07
00:28:41.596 ===================================================================================================================
00:28:41.596 Total : 13320.31 52.03 0.00 0.00 9592.21 6116.69 20486.07
00:28:41.596 Received shutdown signal, test time was about 1.000000 seconds
00:28:41.596
00:28:41.596 Latency(us)
00:28:41.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.596 ===================================================================================================================
00:28:41.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:41.596 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:28:41.596 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller --
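The bdevperf summary line reports 13320.31 IOPS and 52.03 MiB/s for 4096-byte writes at queue depth 128. A quick sanity check that the throughput and latency figures are mutually consistent (pure arithmetic on the logged numbers; no SPDK involved):

```python
# Figures taken from the bdevperf latency table in the log.
iops = 13320.31
io_size = 4096        # bytes per IO, from the "IO size: 4096" job line
queue_depth = 128     # from the "depth: 128" job line
avg_latency_us = 9592.21

# Throughput in MiB/s: IOPS * bytes-per-IO, converted to mebibytes.
mib_s = iops * io_size / 2**20

# Little's law: with a full queue, average latency ~= depth / IOPS.
predicted_latency_us = queue_depth / iops * 1e6
```

The predicted latency (about 9609 us) lands within ~1% of the reported 9592.21 us average, which is what one expects when the queue stays saturated for the whole 1-second run.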
nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.596 rmmod nvme_tcp 00:28:41.596 rmmod nvme_fabrics 00:28:41.596 rmmod nvme_keyring 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 236396 ']' 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 236396 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 236396 ']' 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 236396 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236396 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236396' 00:28:41.596 killing process with pid 236396 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 236396 00:28:41.596 06:20:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@974 -- # wait 236396 00:28:42.971 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.971 06:20:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:45.511 00:28:45.511 real 0m10.910s 00:28:45.511 user 0m22.371s 00:28:45.511 sys 0m2.640s 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.511 ************************************ 00:28:45.511 END TEST nvmf_multicontroller 00:28:45.511 ************************************ 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:45.511 
06:20:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.511 ************************************ 00:28:45.511 START TEST nvmf_aer 00:28:45.511 ************************************ 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:45.511 * Looking for test storage... 00:28:45.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.511 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:45.512 06:20:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
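The device scan above builds per-family PCI ID lists (e810, x722, mlx) and then classifies each NIC against them before picking TCP test interfaces; both ports this job finds report vendor 0x8086, device 0x159b and land in the e810 bucket (driver `ice`). A minimal sketch of that classification (the ID constants are copied from the probes in the log; the helper name is illustrative, not part of nvmf/common.sh):

```python
# PCI device IDs the script probes for Intel E810 NICs (from the
# pci_bus_cache lookups logged above: 0x1592 and 0x159b).
INTEL_VENDOR = 0x8086
E810_DEVICE_IDS = {0x1592, 0x159B}

def is_e810(vendor: int, device: int) -> bool:
    """Return True if the (vendor, device) pair matches the e810 ID list."""
    return vendor == INTEL_VENDOR and device in E810_DEVICE_IDS

# Both ports found in the log: 0000:0a:00.0 and 0000:0a:00.1 (0x8086 - 0x159b)
found = [(0x8086, 0x159B), (0x8086, 0x159B)]
matches = [is_e810(v, d) for v, d in found]
```

Only after this bucketing does the script walk `/sys/bus/pci/devices/$pci/net/` to map each matched device to its net interface (cvl_0_0, cvl_0_1).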
00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:47.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.418 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:47.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:47.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:47.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:47.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:28:47.419 00:28:47.419 --- 10.0.0.2 ping statistics --- 00:28:47.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.419 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:28:47.419 00:28:47.419 --- 10.0.0.1 ping statistics --- 00:28:47.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.419 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=239085 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 239085 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:47.419 06:20:58 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 239085 ']' 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.419 06:20:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.419 [2024-07-26 06:20:58.506785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:47.419 [2024-07-26 06:20:58.506939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.419 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.419 [2024-07-26 06:20:58.663803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.678 [2024-07-26 06:20:58.926836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.678 [2024-07-26 06:20:58.926916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.678 [2024-07-26 06:20:58.926953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.678 [2024-07-26 06:20:58.926975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:47.678 [2024-07-26 06:20:58.927004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.678 [2024-07-26 06:20:58.927132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.678 [2024-07-26 06:20:58.927182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.678 [2024-07-26 06:20:58.927219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.678 [2024-07-26 06:20:58.927233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.244 [2024-07-26 06:20:59.462801] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.244 06:20:59 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.244 Malloc0 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:48.244 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.245 [2024-07-26 06:20:59.567482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.245 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.503 [ 
00:28:48.503 { 00:28:48.503 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:48.503 "subtype": "Discovery", 00:28:48.503 "listen_addresses": [], 00:28:48.503 "allow_any_host": true, 00:28:48.503 "hosts": [] 00:28:48.503 }, 00:28:48.503 { 00:28:48.503 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.503 "subtype": "NVMe", 00:28:48.503 "listen_addresses": [ 00:28:48.503 { 00:28:48.503 "trtype": "TCP", 00:28:48.503 "adrfam": "IPv4", 00:28:48.503 "traddr": "10.0.0.2", 00:28:48.503 "trsvcid": "4420" 00:28:48.503 } 00:28:48.503 ], 00:28:48.503 "allow_any_host": true, 00:28:48.503 "hosts": [], 00:28:48.503 "serial_number": "SPDK00000000000001", 00:28:48.503 "model_number": "SPDK bdev Controller", 00:28:48.503 "max_namespaces": 2, 00:28:48.503 "min_cntlid": 1, 00:28:48.503 "max_cntlid": 65519, 00:28:48.503 "namespaces": [ 00:28:48.503 { 00:28:48.503 "nsid": 1, 00:28:48.503 "bdev_name": "Malloc0", 00:28:48.503 "name": "Malloc0", 00:28:48.503 "nguid": "DBD3E6E250344CF9A951329712C8F672", 00:28:48.503 "uuid": "dbd3e6e2-5034-4cf9-a951-329712c8f672" 00:28:48.503 } 00:28:48.503 ] 00:28:48.503 } 00:28:48.503 ] 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=239301 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:48.503 06:20:59 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:48.503 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:48.503 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:48.761 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.762 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:48.762 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:48.762 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:48.762 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.762 06:20:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.762 Malloc1 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.762 [ 00:28:48.762 { 00:28:48.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:48.762 "subtype": "Discovery", 00:28:48.762 "listen_addresses": [], 00:28:48.762 "allow_any_host": true, 00:28:48.762 "hosts": [] 00:28:48.762 }, 00:28:48.762 { 00:28:48.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.762 "subtype": "NVMe", 00:28:48.762 "listen_addresses": [ 00:28:48.762 { 00:28:48.762 "trtype": "TCP", 00:28:48.762 "adrfam": "IPv4", 00:28:48.762 "traddr": "10.0.0.2", 00:28:48.762 "trsvcid": "4420" 00:28:48.762 } 00:28:48.762 ], 00:28:48.762 "allow_any_host": true, 00:28:48.762 "hosts": [], 00:28:48.762 "serial_number": "SPDK00000000000001", 00:28:48.762 "model_number": 
"SPDK bdev Controller", 00:28:48.762 "max_namespaces": 2, 00:28:48.762 "min_cntlid": 1, 00:28:48.762 "max_cntlid": 65519, 00:28:48.762 "namespaces": [ 00:28:48.762 { 00:28:48.762 "nsid": 1, 00:28:48.762 "bdev_name": "Malloc0", 00:28:48.762 "name": "Malloc0", 00:28:48.762 "nguid": "DBD3E6E250344CF9A951329712C8F672", 00:28:48.762 "uuid": "dbd3e6e2-5034-4cf9-a951-329712c8f672" 00:28:48.762 }, 00:28:48.762 { 00:28:48.762 "nsid": 2, 00:28:48.762 "bdev_name": "Malloc1", 00:28:48.762 "name": "Malloc1", 00:28:48.762 "nguid": "E78C7C841EAF4BF8996B518FCB0745D4", 00:28:48.762 "uuid": "e78c7c84-1eaf-4bf8-996b-518fcb0745d4" 00:28:48.762 } 00:28:48.762 ] 00:28:48.762 } 00:28:48.762 ] 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.762 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 239301 00:28:49.020 Asynchronous Event Request test 00:28:49.020 Attaching to 10.0.0.2 00:28:49.020 Attached to 10.0.0.2 00:28:49.020 Registering asynchronous event callbacks... 00:28:49.020 Starting namespace attribute notice tests for all controllers... 00:28:49.020 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:49.020 aer_cb - Changed Namespace 00:28:49.020 Cleaning up... 
00:28:49.020 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:49.020 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.020 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.020 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.020 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:49.020 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.020 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:49.278 rmmod nvme_tcp 
00:28:49.278 rmmod nvme_fabrics 00:28:49.278 rmmod nvme_keyring 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 239085 ']' 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 239085 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 239085 ']' 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 239085 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 239085 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 239085' 00:28:49.278 killing process with pid 239085 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 239085 00:28:49.278 06:21:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 239085 00:28:50.655 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.655 06:21:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.557 06:21:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:52.557 00:28:52.557 real 0m7.480s 00:28:52.557 user 0m10.764s 00:28:52.557 sys 0m2.120s 00:28:52.557 06:21:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:52.557 06:21:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.557 ************************************ 00:28:52.557 END TEST nvmf_aer 00:28:52.557 ************************************ 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.816 ************************************ 00:28:52.816 START TEST nvmf_async_init 00:28:52.816 ************************************ 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:52.816 * Looking 
for test storage... 00:28:52.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:52.816 06:21:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=43276d2b6bdb4d3e9add1f94a101c14d 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:52.816 06:21:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.718 
06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:54.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.718 06:21:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:54.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:54.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.718 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:54.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:54.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:28:54.719 00:28:54.719 --- 10.0.0.2 ping statistics --- 00:28:54.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.719 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:28:54.719 00:28:54.719 --- 10.0.0.1 ping statistics --- 00:28:54.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.719 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:54.719 06:21:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=241489 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 241489 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 241489 ']' 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.719 06:21:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:54.977 [2024-07-26 06:21:06.066349] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:54.977 [2024-07-26 06:21:06.066478] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.977 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.977 [2024-07-26 06:21:06.203114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.236 [2024-07-26 06:21:06.458377] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.236 [2024-07-26 06:21:06.458468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.236 [2024-07-26 06:21:06.458497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.236 [2024-07-26 06:21:06.458524] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.236 [2024-07-26 06:21:06.458546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:55.236 [2024-07-26 06:21:06.458605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 [2024-07-26 06:21:07.099548] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 null0 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 43276d2b6bdb4d3e9add1f94a101c14d 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.803 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.062 [2024-07-26 06:21:07.139904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.062 nvme0n1 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.062 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.062 [ 00:28:56.062 { 00:28:56.062 "name": "nvme0n1", 00:28:56.062 "aliases": [ 00:28:56.062 "43276d2b-6bdb-4d3e-9add-1f94a101c14d" 00:28:56.062 ], 00:28:56.062 "product_name": "NVMe disk", 00:28:56.062 "block_size": 512, 00:28:56.062 "num_blocks": 2097152, 00:28:56.062 "uuid": "43276d2b-6bdb-4d3e-9add-1f94a101c14d", 00:28:56.062 "assigned_rate_limits": { 00:28:56.062 "rw_ios_per_sec": 0, 00:28:56.062 "rw_mbytes_per_sec": 0, 00:28:56.062 "r_mbytes_per_sec": 0, 00:28:56.062 "w_mbytes_per_sec": 0 00:28:56.062 }, 00:28:56.062 "claimed": false, 00:28:56.062 "zoned": false, 00:28:56.062 "supported_io_types": { 00:28:56.062 "read": true, 00:28:56.062 "write": true, 00:28:56.062 "unmap": false, 00:28:56.062 "flush": true, 00:28:56.062 "reset": true, 00:28:56.062 "nvme_admin": true, 00:28:56.062 "nvme_io": true, 00:28:56.062 "nvme_io_md": false, 00:28:56.062 "write_zeroes": true, 00:28:56.062 "zcopy": false, 00:28:56.062 "get_zone_info": false, 00:28:56.062 "zone_management": false, 00:28:56.062 "zone_append": false, 00:28:56.062 "compare": true, 00:28:56.062 "compare_and_write": true, 00:28:56.062 "abort": true, 00:28:56.062 "seek_hole": false, 00:28:56.062 "seek_data": false, 00:28:56.062 "copy": true, 00:28:56.062 "nvme_iov_md": false 
00:28:56.062 }, 00:28:56.062 "memory_domains": [ 00:28:56.062 { 00:28:56.062 "dma_device_id": "system", 00:28:56.062 "dma_device_type": 1 00:28:56.062 } 00:28:56.062 ], 00:28:56.062 "driver_specific": { 00:28:56.062 "nvme": [ 00:28:56.062 { 00:28:56.062 "trid": { 00:28:56.062 "trtype": "TCP", 00:28:56.062 "adrfam": "IPv4", 00:28:56.062 "traddr": "10.0.0.2", 00:28:56.062 "trsvcid": "4420", 00:28:56.062 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:56.062 }, 00:28:56.062 "ctrlr_data": { 00:28:56.062 "cntlid": 1, 00:28:56.062 "vendor_id": "0x8086", 00:28:56.062 "model_number": "SPDK bdev Controller", 00:28:56.062 "serial_number": "00000000000000000000", 00:28:56.062 "firmware_revision": "24.09", 00:28:56.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.062 "oacs": { 00:28:56.062 "security": 0, 00:28:56.062 "format": 0, 00:28:56.062 "firmware": 0, 00:28:56.062 "ns_manage": 0 00:28:56.062 }, 00:28:56.062 "multi_ctrlr": true, 00:28:56.063 "ana_reporting": false 00:28:56.063 }, 00:28:56.063 "vs": { 00:28:56.063 "nvme_version": "1.3" 00:28:56.063 }, 00:28:56.063 "ns_data": { 00:28:56.063 "id": 1, 00:28:56.063 "can_share": true 00:28:56.063 } 00:28:56.063 } 00:28:56.063 ], 00:28:56.063 "mp_policy": "active_passive" 00:28:56.063 } 00:28:56.063 } 00:28:56.063 ] 00:28:56.063 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.063 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:56.063 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.063 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 [2024-07-26 06:21:07.396449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:56.346 [2024-07-26 06:21:07.396598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x6150001f2780 (9): Bad file descriptor 00:28:56.346 [2024-07-26 06:21:07.529308] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 [ 00:28:56.346 { 00:28:56.346 "name": "nvme0n1", 00:28:56.346 "aliases": [ 00:28:56.346 "43276d2b-6bdb-4d3e-9add-1f94a101c14d" 00:28:56.346 ], 00:28:56.346 "product_name": "NVMe disk", 00:28:56.346 "block_size": 512, 00:28:56.346 "num_blocks": 2097152, 00:28:56.346 "uuid": "43276d2b-6bdb-4d3e-9add-1f94a101c14d", 00:28:56.346 "assigned_rate_limits": { 00:28:56.346 "rw_ios_per_sec": 0, 00:28:56.346 "rw_mbytes_per_sec": 0, 00:28:56.346 "r_mbytes_per_sec": 0, 00:28:56.346 "w_mbytes_per_sec": 0 00:28:56.346 }, 00:28:56.346 "claimed": false, 00:28:56.346 "zoned": false, 00:28:56.346 "supported_io_types": { 00:28:56.346 "read": true, 00:28:56.346 "write": true, 00:28:56.346 "unmap": false, 00:28:56.346 "flush": true, 00:28:56.346 "reset": true, 00:28:56.346 "nvme_admin": true, 00:28:56.346 "nvme_io": true, 00:28:56.346 "nvme_io_md": false, 00:28:56.346 "write_zeroes": true, 00:28:56.346 "zcopy": false, 00:28:56.346 "get_zone_info": false, 00:28:56.346 "zone_management": false, 00:28:56.346 "zone_append": false, 00:28:56.346 "compare": true, 00:28:56.346 "compare_and_write": true, 00:28:56.346 "abort": true, 00:28:56.346 "seek_hole": false, 00:28:56.346 "seek_data": false, 00:28:56.346 "copy": true, 00:28:56.346 "nvme_iov_md": false 00:28:56.346 }, 00:28:56.346 "memory_domains": [ 00:28:56.346 { 00:28:56.346 "dma_device_id": "system", 00:28:56.346 
"dma_device_type": 1 00:28:56.346 } 00:28:56.346 ], 00:28:56.346 "driver_specific": { 00:28:56.346 "nvme": [ 00:28:56.346 { 00:28:56.346 "trid": { 00:28:56.346 "trtype": "TCP", 00:28:56.346 "adrfam": "IPv4", 00:28:56.346 "traddr": "10.0.0.2", 00:28:56.346 "trsvcid": "4420", 00:28:56.346 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:56.346 }, 00:28:56.346 "ctrlr_data": { 00:28:56.346 "cntlid": 2, 00:28:56.346 "vendor_id": "0x8086", 00:28:56.346 "model_number": "SPDK bdev Controller", 00:28:56.346 "serial_number": "00000000000000000000", 00:28:56.346 "firmware_revision": "24.09", 00:28:56.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.346 "oacs": { 00:28:56.346 "security": 0, 00:28:56.346 "format": 0, 00:28:56.346 "firmware": 0, 00:28:56.346 "ns_manage": 0 00:28:56.346 }, 00:28:56.346 "multi_ctrlr": true, 00:28:56.346 "ana_reporting": false 00:28:56.346 }, 00:28:56.346 "vs": { 00:28:56.346 "nvme_version": "1.3" 00:28:56.346 }, 00:28:56.346 "ns_data": { 00:28:56.346 "id": 1, 00:28:56.346 "can_share": true 00:28:56.346 } 00:28:56.346 } 00:28:56.346 ], 00:28:56.346 "mp_policy": "active_passive" 00:28:56.346 } 00:28:56.346 } 00:28:56.346 ] 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PZUPOTFKHD 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PZUPOTFKHD 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 [2024-07-26 06:21:07.581150] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:56.346 [2024-07-26 06:21:07.581432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PZUPOTFKHD 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 [2024-07-26 06:21:07.589148] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated 
feature PSK path to be removed in v24.09 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PZUPOTFKHD 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.346 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 [2024-07-26 06:21:07.597153] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:56.346 [2024-07-26 06:21:07.597293] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:56.605 nvme0n1 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.605 [ 00:28:56.605 { 00:28:56.605 "name": "nvme0n1", 00:28:56.605 "aliases": [ 00:28:56.605 "43276d2b-6bdb-4d3e-9add-1f94a101c14d" 00:28:56.605 ], 00:28:56.605 "product_name": "NVMe disk", 00:28:56.605 "block_size": 512, 00:28:56.605 "num_blocks": 2097152, 00:28:56.605 "uuid": "43276d2b-6bdb-4d3e-9add-1f94a101c14d", 00:28:56.605 "assigned_rate_limits": { 00:28:56.605 "rw_ios_per_sec": 0, 00:28:56.605 "rw_mbytes_per_sec": 0, 00:28:56.605 "r_mbytes_per_sec": 0, 00:28:56.605 "w_mbytes_per_sec": 0 00:28:56.605 }, 00:28:56.605 "claimed": false, 00:28:56.605 "zoned": false, 
00:28:56.605 "supported_io_types": { 00:28:56.605 "read": true, 00:28:56.605 "write": true, 00:28:56.605 "unmap": false, 00:28:56.605 "flush": true, 00:28:56.605 "reset": true, 00:28:56.605 "nvme_admin": true, 00:28:56.605 "nvme_io": true, 00:28:56.605 "nvme_io_md": false, 00:28:56.605 "write_zeroes": true, 00:28:56.605 "zcopy": false, 00:28:56.605 "get_zone_info": false, 00:28:56.605 "zone_management": false, 00:28:56.605 "zone_append": false, 00:28:56.605 "compare": true, 00:28:56.605 "compare_and_write": true, 00:28:56.605 "abort": true, 00:28:56.605 "seek_hole": false, 00:28:56.605 "seek_data": false, 00:28:56.605 "copy": true, 00:28:56.605 "nvme_iov_md": false 00:28:56.605 }, 00:28:56.605 "memory_domains": [ 00:28:56.605 { 00:28:56.605 "dma_device_id": "system", 00:28:56.605 "dma_device_type": 1 00:28:56.605 } 00:28:56.605 ], 00:28:56.605 "driver_specific": { 00:28:56.605 "nvme": [ 00:28:56.605 { 00:28:56.605 "trid": { 00:28:56.605 "trtype": "TCP", 00:28:56.605 "adrfam": "IPv4", 00:28:56.605 "traddr": "10.0.0.2", 00:28:56.605 "trsvcid": "4421", 00:28:56.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:56.605 }, 00:28:56.605 "ctrlr_data": { 00:28:56.605 "cntlid": 3, 00:28:56.605 "vendor_id": "0x8086", 00:28:56.605 "model_number": "SPDK bdev Controller", 00:28:56.605 "serial_number": "00000000000000000000", 00:28:56.605 "firmware_revision": "24.09", 00:28:56.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.605 "oacs": { 00:28:56.605 "security": 0, 00:28:56.605 "format": 0, 00:28:56.605 "firmware": 0, 00:28:56.605 "ns_manage": 0 00:28:56.605 }, 00:28:56.605 "multi_ctrlr": true, 00:28:56.605 "ana_reporting": false 00:28:56.605 }, 00:28:56.605 "vs": { 00:28:56.605 "nvme_version": "1.3" 00:28:56.605 }, 00:28:56.605 "ns_data": { 00:28:56.605 "id": 1, 00:28:56.605 "can_share": true 00:28:56.605 } 00:28:56.605 } 00:28:56.605 ], 00:28:56.605 "mp_policy": "active_passive" 00:28:56.605 } 00:28:56.605 } 00:28:56.605 ] 00:28:56.605 06:21:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.PZUPOTFKHD 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:56.605 rmmod nvme_tcp 00:28:56.605 rmmod nvme_fabrics 00:28:56.605 rmmod nvme_keyring 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 241489 ']' 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@490 -- # killprocess 241489 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 241489 ']' 00:28:56.605 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 241489 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 241489 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 241489' 00:28:56.606 killing process with pid 241489 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 241489 00:28:56.606 [2024-07-26 06:21:07.783943] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:56.606 [2024-07-26 06:21:07.783999] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:56.606 06:21:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 241489 00:28:57.978 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:28:57.978 06:21:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:57.978 06:21:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:57.978 06:21:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:57.978 06:21:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.978 06:21:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.978 06:21:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.978 06:21:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.978 06:21:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.880 00:28:59.880 real 0m7.188s 00:28:59.880 user 0m3.953s 00:28:59.880 sys 0m1.929s 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.880 ************************************ 00:28:59.880 END TEST nvmf_async_init 00:28:59.880 ************************************ 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.880 ************************************ 00:28:59.880 START TEST dma 00:28:59.880 ************************************ 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:59.880 * Looking for 
test storage... 00:28:59.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.880 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.138 
06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:00.138 00:29:00.138 real 0m0.064s 00:29:00.138 user 0m0.031s 00:29:00.138 sys 0m0.038s 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:00.138 ************************************ 00:29:00.138 END TEST dma 00:29:00.138 ************************************ 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.138 ************************************ 00:29:00.138 START TEST nvmf_identify 00:29:00.138 ************************************ 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:00.138 * Looking for test storage... 
00:29:00.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.138 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:00.139 06:21:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.038 06:21:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:02.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:02.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.038 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:02.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:02.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:02.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:02.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:29:02.039 00:29:02.039 --- 10.0.0.2 ping statistics --- 00:29:02.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.039 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:02.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:29:02.039 00:29:02.039 --- 10.0.0.1 ping statistics --- 00:29:02.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.039 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=244370 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 244370 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 244370 ']' 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.039 06:21:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:02.297 [2024-07-26 06:21:13.451420] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:02.297 [2024-07-26 06:21:13.451581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.297 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.297 [2024-07-26 06:21:13.581116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.554 [2024-07-26 06:21:13.811151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.554 [2024-07-26 06:21:13.811228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.554 [2024-07-26 06:21:13.811266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.554 [2024-07-26 06:21:13.811296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.554 [2024-07-26 06:21:13.811325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:02.554 [2024-07-26 06:21:13.811479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.554 [2024-07-26 06:21:13.811560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.554 [2024-07-26 06:21:13.811611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.554 [2024-07-26 06:21:13.811614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.119 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.119 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:03.119 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.119 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.119 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.119 [2024-07-26 06:21:14.385264] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.119 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.120 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:03.120 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.120 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.120 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:03.120 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.120 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.379 Malloc0 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.379 06:21:14 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.379 [2024-07-26 06:21:14.514841] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.379 06:21:14 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:03.379 [ 00:29:03.379 { 00:29:03.379 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:03.379 "subtype": "Discovery", 00:29:03.379 "listen_addresses": [ 00:29:03.379 { 00:29:03.379 "trtype": "TCP", 00:29:03.379 "adrfam": "IPv4", 00:29:03.379 "traddr": "10.0.0.2", 00:29:03.379 "trsvcid": "4420" 00:29:03.379 } 00:29:03.379 ], 00:29:03.379 "allow_any_host": true, 00:29:03.379 "hosts": [] 00:29:03.379 }, 00:29:03.379 { 00:29:03.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.379 "subtype": "NVMe", 00:29:03.379 "listen_addresses": [ 00:29:03.379 { 00:29:03.379 "trtype": "TCP", 00:29:03.379 "adrfam": "IPv4", 00:29:03.379 "traddr": "10.0.0.2", 00:29:03.379 "trsvcid": "4420" 00:29:03.379 } 00:29:03.379 ], 00:29:03.379 "allow_any_host": true, 00:29:03.379 "hosts": [], 00:29:03.379 "serial_number": "SPDK00000000000001", 00:29:03.379 "model_number": "SPDK bdev Controller", 00:29:03.379 "max_namespaces": 32, 00:29:03.379 "min_cntlid": 1, 00:29:03.379 "max_cntlid": 65519, 00:29:03.379 "namespaces": [ 00:29:03.379 { 00:29:03.379 "nsid": 1, 00:29:03.379 "bdev_name": "Malloc0", 00:29:03.379 "name": "Malloc0", 00:29:03.379 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:03.379 "eui64": "ABCDEF0123456789", 00:29:03.379 "uuid": "9c8da8ff-062d-4081-9209-e60565b325d8" 00:29:03.379 } 00:29:03.379 ] 00:29:03.379 } 00:29:03.379 ] 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.379 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
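The `nvmf_get_subsystems` response above can be consumed programmatically. A hedged sketch that extracts the NVMe subsystem and its Malloc0 namespace from a response of that shape (the JSON below is an abbreviated copy of the trace output, not a live RPC call):

```python
import json

# Abbreviated copy of the nvmf_get_subsystems response shown in the trace.
response = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": [],
   "serial_number": "SPDK00000000000001",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc0", "name": "Malloc0",
                   "nguid": "ABCDEF0123456789ABCDEF0123456789",
                   "eui64": "ABCDEF0123456789"}]}
]
""")

# Pick out the NVMe subsystem (as opposed to the discovery subsystem).
nvme = next(s for s in response if s["subtype"] == "NVMe")
assert nvme["namespaces"][0]["bdev_name"] == "Malloc0"
print(nvme["nqn"])  # nqn.2016-06.io.spdk:cnode1
```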
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:03.379 [2024-07-26 06:21:14.585666] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:03.379 [2024-07-26 06:21:14.585786] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244524 ] 00:29:03.379 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.379 [2024-07-26 06:21:14.652120] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:03.379 [2024-07-26 06:21:14.652259] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:03.379 [2024-07-26 06:21:14.652281] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:03.379 [2024-07-26 06:21:14.652311] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:03.379 [2024-07-26 06:21:14.652338] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:03.379 [2024-07-26 06:21:14.652801] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:03.379 [2024-07-26 06:21:14.652882] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:29:03.379 [2024-07-26 06:21:14.659419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:03.379 [2024-07-26 06:21:14.659453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:03.379 [2024-07-26 06:21:14.659484] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 
0 00:29:03.379 [2024-07-26 06:21:14.659496] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:03.379 [2024-07-26 06:21:14.659583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.379 [2024-07-26 06:21:14.659608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.379 [2024-07-26 06:21:14.659629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.379 [2024-07-26 06:21:14.659667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:03.379 [2024-07-26 06:21:14.659710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.379 [2024-07-26 06:21:14.667082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.379 [2024-07-26 06:21:14.667113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.379 [2024-07-26 06:21:14.667127] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.379 [2024-07-26 06:21:14.667142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.379 [2024-07-26 06:21:14.667191] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:03.379 [2024-07-26 06:21:14.667216] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:03.379 [2024-07-26 06:21:14.667234] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:03.380 [2024-07-26 06:21:14.667269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.667288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.667300] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.380 [2024-07-26 06:21:14.667322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.380 [2024-07-26 06:21:14.667357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.380 [2024-07-26 06:21:14.667561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.380 [2024-07-26 06:21:14.667584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.380 [2024-07-26 06:21:14.667597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.667609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.380 [2024-07-26 06:21:14.667637] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:03.380 [2024-07-26 06:21:14.667664] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:03.380 [2024-07-26 06:21:14.667702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.667717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.667728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.380 [2024-07-26 06:21:14.667752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.380 [2024-07-26 06:21:14.667786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.380 [2024-07-26 06:21:14.667973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:29:03.380 [2024-07-26 06:21:14.667993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.380 [2024-07-26 06:21:14.668005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.668016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.380 [2024-07-26 06:21:14.668034] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:03.380 [2024-07-26 06:21:14.668066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:03.380 [2024-07-26 06:21:14.668089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.668103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.668122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.380 [2024-07-26 06:21:14.668142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.380 [2024-07-26 06:21:14.668175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.380 [2024-07-26 06:21:14.668346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.380 [2024-07-26 06:21:14.668369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.380 [2024-07-26 06:21:14.668381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.668393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.380 [2024-07-26 06:21:14.668414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:03.380 [2024-07-26 06:21:14.668444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.668460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.668473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.380 [2024-07-26 06:21:14.668507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.380 [2024-07-26 06:21:14.668540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.380 [2024-07-26 06:21:14.668730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.380 [2024-07-26 06:21:14.668751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.380 [2024-07-26 06:21:14.668762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.668774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.380 [2024-07-26 06:21:14.668789] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:03.380 [2024-07-26 06:21:14.668817] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:03.380 [2024-07-26 06:21:14.668842] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:03.380 [2024-07-26 06:21:14.668961] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:03.380 [2024-07-26 06:21:14.668977] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:03.380 [2024-07-26 06:21:14.669003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.669016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.669029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.380 [2024-07-26 06:21:14.669047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.380 [2024-07-26 06:21:14.669104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.380 [2024-07-26 06:21:14.669275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.380 [2024-07-26 06:21:14.669295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.380 [2024-07-26 06:21:14.669311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.669324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.380 [2024-07-26 06:21:14.669339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:03.380 [2024-07-26 06:21:14.669373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.669390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.669402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.380 [2024-07-26 06:21:14.669437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:03.380 [2024-07-26 06:21:14.669469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.380 [2024-07-26 06:21:14.669651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.380 [2024-07-26 06:21:14.669672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.380 [2024-07-26 06:21:14.669683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.669694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.380 [2024-07-26 06:21:14.669713] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:03.380 [2024-07-26 06:21:14.669740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:03.380 [2024-07-26 06:21:14.669763] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:03.380 [2024-07-26 06:21:14.669805] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:03.380 [2024-07-26 06:21:14.669836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.669856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.380 [2024-07-26 06:21:14.669877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.380 [2024-07-26 06:21:14.669909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.380 
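The `_nvme_ctrlr_set_state` debug lines above walk the controller initialization state machine in order: connect the admin queue, read VS and CAP, check CC.EN, run the disable/enable handshake, wait for CSTS.RDY = 1, then move on to identify. A condensed sketch of that ordering, with the state names copied from the trace (the list itself is an illustration, not an SPDK API):

```python
# State names as they appear in the _nvme_ctrlr_set_state debug lines above.
INIT_SEQUENCE = [
    "connect adminq",
    "wait for connect adminq",
    "read vs",
    "read vs wait for vs",
    "read cap",
    "read cap wait for cap",
    "check en",
    "check en wait for cc",
    "disable and wait for CSTS.RDY = 0",
    "controller is disabled",
    "enable controller by writing CC.EN = 1",
    "enable controller by writing CC.EN = 1 reg",
    "wait for CSTS.RDY = 1",
    "reset admin queue",
    "identify controller",
    "wait for identify controller",
]

# Identify is only reached after the CC.EN/CSTS.RDY enable handshake completes.
assert INIT_SEQUENCE.index("identify controller") > \
       INIT_SEQUENCE.index("wait for CSTS.RDY = 1")
```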
[2024-07-26 06:21:14.670168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.380 [2024-07-26 06:21:14.670192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.380 [2024-07-26 06:21:14.670205] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.670223] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:29:03.380 [2024-07-26 06:21:14.670239] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:03.380 [2024-07-26 06:21:14.670254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.670276] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.670293] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.670314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.380 [2024-07-26 06:21:14.670331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.380 [2024-07-26 06:21:14.670342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.670353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.380 [2024-07-26 06:21:14.670386] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:03.380 [2024-07-26 06:21:14.670405] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:03.380 [2024-07-26 06:21:14.670428] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:03.380 [2024-07-26 06:21:14.670462] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:03.380 [2024-07-26 06:21:14.670476] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:03.380 [2024-07-26 06:21:14.670491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:03.380 [2024-07-26 06:21:14.670529] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:03.380 [2024-07-26 06:21:14.670555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.380 [2024-07-26 06:21:14.670569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.670597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.670618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:03.381 [2024-07-26 06:21:14.670655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.381 [2024-07-26 06:21:14.670857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.381 [2024-07-26 06:21:14.670882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.381 [2024-07-26 06:21:14.670895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.670906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.381 [2024-07-26 06:21:14.670932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.670947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.670959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.670997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.381 [2024-07-26 06:21:14.671018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.671029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.671044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.675073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.381 [2024-07-26 06:21:14.675098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.675118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.675130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.675147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.381 [2024-07-26 06:21:14.675163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.675174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.675184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.675200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.381 [2024-07-26 06:21:14.675214] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:03.381 [2024-07-26 06:21:14.675258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:03.381 [2024-07-26 06:21:14.675284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.675298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.675318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.381 [2024-07-26 06:21:14.675377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:03.381 [2024-07-26 06:21:14.675397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:29:03.381 [2024-07-26 06:21:14.675425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:29:03.381 [2024-07-26 06:21:14.675438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.381 [2024-07-26 06:21:14.675451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:03.381 [2024-07-26 06:21:14.675677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.381 [2024-07-26 06:21:14.675700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.381 [2024-07-26 06:21:14.675713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.675740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:03.381 [2024-07-26 06:21:14.675760] 
nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:03.381 [2024-07-26 06:21:14.675775] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:03.381 [2024-07-26 06:21:14.675815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.675832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.675852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.381 [2024-07-26 06:21:14.675882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:03.381 [2024-07-26 06:21:14.676086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.381 [2024-07-26 06:21:14.676109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.381 [2024-07-26 06:21:14.676128] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676146] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:03.381 [2024-07-26 06:21:14.676160] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:03.381 [2024-07-26 06:21:14.676172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676207] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676223] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.381 
[2024-07-26 06:21:14.676260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.381 [2024-07-26 06:21:14.676271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:03.381 [2024-07-26 06:21:14.676321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:03.381 [2024-07-26 06:21:14.676392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.676456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.381 [2024-07-26 06:21:14.676480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:03.381 [2024-07-26 06:21:14.676522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.381 [2024-07-26 06:21:14.676556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:03.381 [2024-07-26 06:21:14.676592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:03.381 [2024-07-26 06:21:14.676950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.381 [2024-07-26 06:21:14.676973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.381 
[2024-07-26 06:21:14.676985] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.676997] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:29:03.381 [2024-07-26 06:21:14.677009] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:29:03.381 [2024-07-26 06:21:14.677027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.677069] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.677085] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.677105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.381 [2024-07-26 06:21:14.677122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.381 [2024-07-26 06:21:14.677140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.381 [2024-07-26 06:21:14.677153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:03.644 [2024-07-26 06:21:14.717213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.644 [2024-07-26 06:21:14.717246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.644 [2024-07-26 06:21:14.717260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:03.644 [2024-07-26 06:21:14.717319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:03.644 [2024-07-26 06:21:14.717361] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-07-26 06:21:14.717407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:03.644 [2024-07-26 06:21:14.717602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.644 [2024-07-26 06:21:14.717625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.644 [2024-07-26 06:21:14.717637] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717648] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:29:03.644 [2024-07-26 06:21:14.717661] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:29:03.644 [2024-07-26 06:21:14.717672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717690] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717703] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.644 [2024-07-26 06:21:14.717739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.644 [2024-07-26 06:21:14.717750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:03.644 [2024-07-26 06:21:14.717790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.717807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 
00:29:03.644 [2024-07-26 06:21:14.717835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-07-26 06:21:14.717904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:03.644 [2024-07-26 06:21:14.718151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.644 [2024-07-26 06:21:14.718173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.644 [2024-07-26 06:21:14.718184] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.718195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:29:03.644 [2024-07-26 06:21:14.718207] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:29:03.644 [2024-07-26 06:21:14.718219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.718235] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.718248] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.763095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.644 [2024-07-26 06:21:14.763132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.644 [2024-07-26 06:21:14.763145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.644 [2024-07-26 06:21:14.763157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:03.644 ===================================================== 00:29:03.644 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:03.644 
===================================================== 00:29:03.644 Controller Capabilities/Features 00:29:03.644 ================================ 00:29:03.644 Vendor ID: 0000 00:29:03.644 Subsystem Vendor ID: 0000 00:29:03.644 Serial Number: .................... 00:29:03.644 Model Number: ........................................ 00:29:03.644 Firmware Version: 24.09 00:29:03.644 Recommended Arb Burst: 0 00:29:03.644 IEEE OUI Identifier: 00 00 00 00:29:03.644 Multi-path I/O 00:29:03.644 May have multiple subsystem ports: No 00:29:03.644 May have multiple controllers: No 00:29:03.644 Associated with SR-IOV VF: No 00:29:03.644 Max Data Transfer Size: 131072 00:29:03.644 Max Number of Namespaces: 0 00:29:03.644 Max Number of I/O Queues: 1024 00:29:03.644 NVMe Specification Version (VS): 1.3 00:29:03.644 NVMe Specification Version (Identify): 1.3 00:29:03.644 Maximum Queue Entries: 128 00:29:03.644 Contiguous Queues Required: Yes 00:29:03.644 Arbitration Mechanisms Supported 00:29:03.644 Weighted Round Robin: Not Supported 00:29:03.644 Vendor Specific: Not Supported 00:29:03.644 Reset Timeout: 15000 ms 00:29:03.644 Doorbell Stride: 4 bytes 00:29:03.644 NVM Subsystem Reset: Not Supported 00:29:03.644 Command Sets Supported 00:29:03.644 NVM Command Set: Supported 00:29:03.644 Boot Partition: Not Supported 00:29:03.644 Memory Page Size Minimum: 4096 bytes 00:29:03.644 Memory Page Size Maximum: 4096 bytes 00:29:03.644 Persistent Memory Region: Not Supported 00:29:03.644 Optional Asynchronous Events Supported 00:29:03.644 Namespace Attribute Notices: Not Supported 00:29:03.644 Firmware Activation Notices: Not Supported 00:29:03.644 ANA Change Notices: Not Supported 00:29:03.644 PLE Aggregate Log Change Notices: Not Supported 00:29:03.644 LBA Status Info Alert Notices: Not Supported 00:29:03.644 EGE Aggregate Log Change Notices: Not Supported 00:29:03.644 Normal NVM Subsystem Shutdown event: Not Supported 00:29:03.644 Zone Descriptor Change Notices: Not Supported 00:29:03.644 
Discovery Log Change Notices: Supported 00:29:03.644 Controller Attributes 00:29:03.644 128-bit Host Identifier: Not Supported 00:29:03.644 Non-Operational Permissive Mode: Not Supported 00:29:03.644 NVM Sets: Not Supported 00:29:03.644 Read Recovery Levels: Not Supported 00:29:03.644 Endurance Groups: Not Supported 00:29:03.644 Predictable Latency Mode: Not Supported 00:29:03.644 Traffic Based Keep ALive: Not Supported 00:29:03.644 Namespace Granularity: Not Supported 00:29:03.645 SQ Associations: Not Supported 00:29:03.645 UUID List: Not Supported 00:29:03.645 Multi-Domain Subsystem: Not Supported 00:29:03.645 Fixed Capacity Management: Not Supported 00:29:03.645 Variable Capacity Management: Not Supported 00:29:03.645 Delete Endurance Group: Not Supported 00:29:03.645 Delete NVM Set: Not Supported 00:29:03.645 Extended LBA Formats Supported: Not Supported 00:29:03.645 Flexible Data Placement Supported: Not Supported 00:29:03.645 00:29:03.645 Controller Memory Buffer Support 00:29:03.645 ================================ 00:29:03.645 Supported: No 00:29:03.645 00:29:03.645 Persistent Memory Region Support 00:29:03.645 ================================ 00:29:03.645 Supported: No 00:29:03.645 00:29:03.645 Admin Command Set Attributes 00:29:03.645 ============================ 00:29:03.645 Security Send/Receive: Not Supported 00:29:03.645 Format NVM: Not Supported 00:29:03.645 Firmware Activate/Download: Not Supported 00:29:03.645 Namespace Management: Not Supported 00:29:03.645 Device Self-Test: Not Supported 00:29:03.645 Directives: Not Supported 00:29:03.645 NVMe-MI: Not Supported 00:29:03.645 Virtualization Management: Not Supported 00:29:03.645 Doorbell Buffer Config: Not Supported 00:29:03.645 Get LBA Status Capability: Not Supported 00:29:03.645 Command & Feature Lockdown Capability: Not Supported 00:29:03.645 Abort Command Limit: 1 00:29:03.645 Async Event Request Limit: 4 00:29:03.645 Number of Firmware Slots: N/A 00:29:03.645 Firmware Slot 1 Read-Only: N/A 
00:29:03.645 Firmware Activation Without Reset: N/A 00:29:03.645 Multiple Update Detection Support: N/A 00:29:03.645 Firmware Update Granularity: No Information Provided 00:29:03.645 Per-Namespace SMART Log: No 00:29:03.645 Asymmetric Namespace Access Log Page: Not Supported 00:29:03.645 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:03.645 Command Effects Log Page: Not Supported 00:29:03.645 Get Log Page Extended Data: Supported 00:29:03.645 Telemetry Log Pages: Not Supported 00:29:03.645 Persistent Event Log Pages: Not Supported 00:29:03.645 Supported Log Pages Log Page: May Support 00:29:03.645 Commands Supported & Effects Log Page: Not Supported 00:29:03.645 Feature Identifiers & Effects Log Page:May Support 00:29:03.645 NVMe-MI Commands & Effects Log Page: May Support 00:29:03.645 Data Area 4 for Telemetry Log: Not Supported 00:29:03.645 Error Log Page Entries Supported: 128 00:29:03.645 Keep Alive: Not Supported 00:29:03.645 00:29:03.645 NVM Command Set Attributes 00:29:03.645 ========================== 00:29:03.645 Submission Queue Entry Size 00:29:03.645 Max: 1 00:29:03.645 Min: 1 00:29:03.645 Completion Queue Entry Size 00:29:03.645 Max: 1 00:29:03.645 Min: 1 00:29:03.645 Number of Namespaces: 0 00:29:03.645 Compare Command: Not Supported 00:29:03.645 Write Uncorrectable Command: Not Supported 00:29:03.645 Dataset Management Command: Not Supported 00:29:03.645 Write Zeroes Command: Not Supported 00:29:03.645 Set Features Save Field: Not Supported 00:29:03.645 Reservations: Not Supported 00:29:03.645 Timestamp: Not Supported 00:29:03.645 Copy: Not Supported 00:29:03.645 Volatile Write Cache: Not Present 00:29:03.645 Atomic Write Unit (Normal): 1 00:29:03.645 Atomic Write Unit (PFail): 1 00:29:03.645 Atomic Compare & Write Unit: 1 00:29:03.645 Fused Compare & Write: Supported 00:29:03.645 Scatter-Gather List 00:29:03.645 SGL Command Set: Supported 00:29:03.645 SGL Keyed: Supported 00:29:03.645 SGL Bit Bucket Descriptor: Not Supported 00:29:03.645 
SGL Metadata Pointer: Not Supported 00:29:03.645 Oversized SGL: Not Supported 00:29:03.645 SGL Metadata Address: Not Supported 00:29:03.645 SGL Offset: Supported 00:29:03.645 Transport SGL Data Block: Not Supported 00:29:03.645 Replay Protected Memory Block: Not Supported 00:29:03.645 00:29:03.645 Firmware Slot Information 00:29:03.645 ========================= 00:29:03.645 Active slot: 0 00:29:03.645 00:29:03.645 00:29:03.645 Error Log 00:29:03.645 ========= 00:29:03.645 00:29:03.645 Active Namespaces 00:29:03.645 ================= 00:29:03.645 Discovery Log Page 00:29:03.645 ================== 00:29:03.645 Generation Counter: 2 00:29:03.645 Number of Records: 2 00:29:03.645 Record Format: 0 00:29:03.645 00:29:03.645 Discovery Log Entry 0 00:29:03.645 ---------------------- 00:29:03.645 Transport Type: 3 (TCP) 00:29:03.645 Address Family: 1 (IPv4) 00:29:03.645 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:03.645 Entry Flags: 00:29:03.645 Duplicate Returned Information: 1 00:29:03.645 Explicit Persistent Connection Support for Discovery: 1 00:29:03.645 Transport Requirements: 00:29:03.645 Secure Channel: Not Required 00:29:03.645 Port ID: 0 (0x0000) 00:29:03.645 Controller ID: 65535 (0xffff) 00:29:03.645 Admin Max SQ Size: 128 00:29:03.645 Transport Service Identifier: 4420 00:29:03.645 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:03.645 Transport Address: 10.0.0.2 00:29:03.645 Discovery Log Entry 1 00:29:03.645 ---------------------- 00:29:03.645 Transport Type: 3 (TCP) 00:29:03.645 Address Family: 1 (IPv4) 00:29:03.645 Subsystem Type: 2 (NVM Subsystem) 00:29:03.645 Entry Flags: 00:29:03.645 Duplicate Returned Information: 0 00:29:03.645 Explicit Persistent Connection Support for Discovery: 0 00:29:03.645 Transport Requirements: 00:29:03.645 Secure Channel: Not Required 00:29:03.645 Port ID: 0 (0x0000) 00:29:03.645 Controller ID: 65535 (0xffff) 00:29:03.645 Admin Max SQ Size: 128 00:29:03.645 Transport Service Identifier: 4420 
00:29:03.645 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:03.645 Transport Address: 10.0.0.2 [2024-07-26 06:21:14.763379] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:03.645 [2024-07-26 06:21:14.763429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.645 [2024-07-26 06:21:14.763453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-07-26 06:21:14.763473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:29:03.645 [2024-07-26 06:21:14.763488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-07-26 06:21:14.763500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:29:03.645 [2024-07-26 06:21:14.763514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-07-26 06:21:14.763526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.645 [2024-07-26 06:21:14.763540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-07-26 06:21:14.763563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.645 [2024-07-26 06:21:14.763577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.645 [2024-07-26 06:21:14.763590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.645 [2024-07-26 06:21:14.763612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-07-26 06:21:14.763656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.645 [2024-07-26 06:21:14.763834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.645 [2024-07-26 06:21:14.763856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.645 [2024-07-26 06:21:14.763869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.645 [2024-07-26 06:21:14.763882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.645 [2024-07-26 06:21:14.763910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.645 [2024-07-26 06:21:14.763925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.645 [2024-07-26 06:21:14.763938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.645 [2024-07-26 06:21:14.763962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-07-26 06:21:14.764025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.645 [2024-07-26 06:21:14.764234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.645 [2024-07-26 06:21:14.764256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.645 [2024-07-26 06:21:14.764268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.645 [2024-07-26 06:21:14.764279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.645 [2024-07-26 06:21:14.764295] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:03.646 [2024-07-26 
06:21:14.764311] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:03.646 [2024-07-26 06:21:14.764347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.764379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.764391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.764410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.764442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.764612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.764632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.764648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.764660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.764689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.764705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.764716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.764734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.764780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.764990] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.765013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.765025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.765073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.765119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.765151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.765318] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.765341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.765352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.765390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.765435] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.765480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.765689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.765712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.765724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.765762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.765789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.765807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.765853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.766071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.766100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.766114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.766128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.766156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 
06:21:14.766171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.766182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.766200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.766231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.766393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.766413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.766425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.766436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.766462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.766476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.766487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.766505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.766536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.766766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.766786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.766798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:03.646 [2024-07-26 06:21:14.766809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.766835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.766850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.766861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.766884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.766931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.771076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.771102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.771114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.771124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.771167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.771183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.771195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.646 [2024-07-26 06:21:14.771213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-07-26 06:21:14.771246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.646 [2024-07-26 06:21:14.771427] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.646 [2024-07-26 06:21:14.771447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.646 [2024-07-26 06:21:14.771459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.646 [2024-07-26 06:21:14.771474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.646 [2024-07-26 06:21:14.771498] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:03.646 00:29:03.646 06:21:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:03.646 [2024-07-26 06:21:14.873836] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:03.646 [2024-07-26 06:21:14.873933] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244527 ] 00:29:03.646 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.646 [2024-07-26 06:21:14.931524] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:03.646 [2024-07-26 06:21:14.931643] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:03.646 [2024-07-26 06:21:14.931664] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:03.646 [2024-07-26 06:21:14.931695] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:03.646 [2024-07-26 06:21:14.931721] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client 
socket using impl posix
00:29:03.646 [2024-07-26 06:21:14.935144] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:29:03.646 [2024-07-26 06:21:14.935220] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0
00:29:03.646 [2024-07-26 06:21:14.943088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:29:03.646 [2024-07-26 06:21:14.943129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:29:03.646 [2024-07-26 06:21:14.943145] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:29:03.647 [2024-07-26 06:21:14.943156] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:29:03.647 [2024-07-26 06:21:14.943235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.943258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.943279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.943313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:29:03.647 [2024-07-26 06:21:14.943369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.951099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.647 [2024-07-26 06:21:14.951126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.647 [2024-07-26 06:21:14.951139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.951153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.647 [2024-07-26 06:21:14.951200] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:29:03.647 [2024-07-26 06:21:14.951230] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:29:03.647 [2024-07-26 06:21:14.951248] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:29:03.647 [2024-07-26 06:21:14.951281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.951305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.951319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.951360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.647 [2024-07-26 06:21:14.951398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.951598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.647 [2024-07-26 06:21:14.951627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.647 [2024-07-26 06:21:14.951647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.951661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.647 [2024-07-26 06:21:14.951683] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:29:03.647 [2024-07-26 06:21:14.951707] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:29:03.647 [2024-07-26 06:21:14.951752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.951766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.951778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.951802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.647 [2024-07-26 06:21:14.951836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.952022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.647 [2024-07-26 06:21:14.952043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.647 [2024-07-26 06:21:14.952055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.952080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.647 [2024-07-26 06:21:14.952098] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:29:03.647 [2024-07-26 06:21:14.952122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:29:03.647 [2024-07-26 06:21:14.952144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.952158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.952171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.952196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.647 [2024-07-26 06:21:14.952229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.952407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.647 [2024-07-26 06:21:14.952429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.647 [2024-07-26 06:21:14.952440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.952452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.647 [2024-07-26 06:21:14.952468] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:29:03.647 [2024-07-26 06:21:14.952496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.952517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.952530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.952566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.647 [2024-07-26 06:21:14.952602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.952782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.647 [2024-07-26 06:21:14.952804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.647 [2024-07-26 06:21:14.952816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.952827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.647 [2024-07-26 06:21:14.952842] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:29:03.647 [2024-07-26 06:21:14.952858] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:29:03.647 [2024-07-26 06:21:14.952886] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:29:03.647 [2024-07-26 06:21:14.953005] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:29:03.647 [2024-07-26 06:21:14.953019] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:29:03.647 [2024-07-26 06:21:14.953065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.953083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.953096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.953120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.647 [2024-07-26 06:21:14.953169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.953401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.647 [2024-07-26 06:21:14.953422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.647 [2024-07-26 06:21:14.953433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.953445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.647 [2024-07-26 06:21:14.953460] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:29:03.647 [2024-07-26 06:21:14.953493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.953513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.953525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.953560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.647 [2024-07-26 06:21:14.953595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.953773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.647 [2024-07-26 06:21:14.953795] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.647 [2024-07-26 06:21:14.953806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.953818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.647 [2024-07-26 06:21:14.953832] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:29:03.647 [2024-07-26 06:21:14.953861] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:29:03.647 [2024-07-26 06:21:14.953885] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:29:03.647 [2024-07-26 06:21:14.953930] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:29:03.647 [2024-07-26 06:21:14.953960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.953975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.647 [2024-07-26 06:21:14.953996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.647 [2024-07-26 06:21:14.954037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.647 [2024-07-26 06:21:14.954287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:03.647 [2024-07-26 06:21:14.954308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:03.647 [2024-07-26 06:21:14.954320] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.954333] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0
00:29:03.647 [2024-07-26 06:21:14.954347] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:29:03.647 [2024-07-26 06:21:14.954362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.954396] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:03.647 [2024-07-26 06:21:14.954414] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.910 [2024-07-26 06:21:14.999125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.910 [2024-07-26 06:21:14.999139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.910 [2024-07-26 06:21:14.999186] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:29:03.910 [2024-07-26 06:21:14.999205] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:29:03.910 [2024-07-26 06:21:14.999218] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:29:03.910 [2024-07-26 06:21:14.999233] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:29:03.910 [2024-07-26 06:21:14.999251] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:29:03.910 [2024-07-26 06:21:14.999266] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:29:03.910 [2024-07-26 06:21:14.999291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:29:03.910 [2024-07-26 06:21:14.999317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.910 [2024-07-26 06:21:14.999398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:29:03.910 [2024-07-26 06:21:14.999449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.910 [2024-07-26 06:21:14.999604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.910 [2024-07-26 06:21:14.999627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.910 [2024-07-26 06:21:14.999638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:29:03.910 [2024-07-26 06:21:14.999676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:29:03.910 [2024-07-26 06:21:14.999730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:03.910 [2024-07-26 06:21:14.999766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700)
00:29:03.910 [2024-07-26 06:21:14.999804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:03.910 [2024-07-26 06:21:14.999820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700)
00:29:03.910 [2024-07-26 06:21:14.999858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:03.910 [2024-07-26 06:21:14.999887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:14.999909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:29:03.910 [2024-07-26 06:21:14.999925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:03.910 [2024-07-26 06:21:14.999939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:29:03.910 [2024-07-26 06:21:14.999981] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:29:03.910 [2024-07-26 06:21:15.000008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.910 [2024-07-26 06:21:15.000022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:29:03.910 [2024-07-26 06:21:15.000056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.910 [2024-07-26 06:21:15.000113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:29:03.910 [2024-07-26 06:21:15.000132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0
00:29:03.910 [2024-07-26 06:21:15.000145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0
00:29:03.910 [2024-07-26 06:21:15.000158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:29:03.910 [2024-07-26 06:21:15.000170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:29:03.910 [2024-07-26 06:21:15.000372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.910 [2024-07-26 06:21:15.000394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.910 [2024-07-26 06:21:15.000406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.000417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:29:03.911 [2024-07-26 06:21:15.000450] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:29:03.911 [2024-07-26 06:21:15.000465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.000493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.000519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.000537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.000551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.000563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:29:03.911 [2024-07-26 06:21:15.000583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:29:03.911 [2024-07-26 06:21:15.000615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:29:03.911 [2024-07-26 06:21:15.000789] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.911 [2024-07-26 06:21:15.000810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.911 [2024-07-26 06:21:15.000821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.000833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:29:03.911 [2024-07-26 06:21:15.000941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.000985] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.001015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.001030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:29:03.911 [2024-07-26 06:21:15.001074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-07-26 06:21:15.001124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:29:03.911 [2024-07-26 06:21:15.001325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:03.911 [2024-07-26 06:21:15.001346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:03.911 [2024-07-26 06:21:15.001357] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.001376] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4
00:29:03.911 [2024-07-26 06:21:15.001389] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:29:03.911 [2024-07-26 06:21:15.001401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.001425] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.001440] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.001459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.911 [2024-07-26 06:21:15.001475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.911 [2024-07-26 06:21:15.001487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.001498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:29:03.911 [2024-07-26 06:21:15.001543] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:29:03.911 [2024-07-26 06:21:15.001578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.001616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.001659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.001677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:29:03.911 [2024-07-26 06:21:15.001698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-07-26 06:21:15.001730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:29:03.911 [2024-07-26 06:21:15.001958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:03.911 [2024-07-26 06:21:15.001981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:03.911 [2024-07-26 06:21:15.001993] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002004] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4
00:29:03.911 [2024-07-26 06:21:15.002016] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:29:03.911 [2024-07-26 06:21:15.002028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002046] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002066] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.911 [2024-07-26 06:21:15.002115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.911 [2024-07-26 06:21:15.002126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:29:03.911 [2024-07-26 06:21:15.002178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002209] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:29:03.911 [2024-07-26 06:21:15.002280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-07-26 06:21:15.002313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:29:03.911 [2024-07-26 06:21:15.002512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:03.911 [2024-07-26 06:21:15.002534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:03.911 [2024-07-26 06:21:15.002545] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002556] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4
00:29:03.911 [2024-07-26 06:21:15.002569] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:29:03.911 [2024-07-26 06:21:15.002581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002599] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002624] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.911 [2024-07-26 06:21:15.002661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.911 [2024-07-26 06:21:15.002673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:29:03.911 [2024-07-26 06:21:15.002718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002824] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002840] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002854] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:29:03.911 [2024-07-26 06:21:15.002867] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:29:03.911 [2024-07-26 06:21:15.002882] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:29:03.911 [2024-07-26 06:21:15.002939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.002956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:29:03.911 [2024-07-26 06:21:15.002976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.911 [2024-07-26 06:21:15.003002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.003031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.911 [2024-07-26 06:21:15.003043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700)
00:29:03.911 [2024-07-26 06:21:15.007074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:29:03.911 [2024-07-26 06:21:15.007118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:29:03.911 [2024-07-26 06:21:15.007155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:29:03.912 [2024-07-26 06:21:15.007337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.912 [2024-07-26 06:21:15.007358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.912 [2024-07-26 06:21:15.007371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.007390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:29:03.912 [2024-07-26 06:21:15.007415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.912 [2024-07-26 06:21:15.007447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.912 [2024-07-26 06:21:15.007459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.007471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700
00:29:03.912 [2024-07-26 06:21:15.007497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.007513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700)
00:29:03.912 [2024-07-26 06:21:15.007532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.912 [2024-07-26 06:21:15.007563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:29:03.912 [2024-07-26 06:21:15.007747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.912 [2024-07-26 06:21:15.007771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.912 [2024-07-26 06:21:15.007782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.007798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700
00:29:03.912 [2024-07-26 06:21:15.007826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.007842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700)
00:29:03.912 [2024-07-26 06:21:15.007866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.912 [2024-07-26 06:21:15.007914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:29:03.912 [2024-07-26 06:21:15.008118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.912 [2024-07-26 06:21:15.008140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.912 [2024-07-26 06:21:15.008152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.008164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700
00:29:03.912 [2024-07-26 06:21:15.008190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.008205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700)
00:29:03.912 [2024-07-26 06:21:15.008225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.912 [2024-07-26 06:21:15.008257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:29:03.912 [2024-07-26 06:21:15.008417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:03.912 [2024-07-26 06:21:15.008437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:03.912 [2024-07-26 06:21:15.008449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.008460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700
00:29:03.912 [2024-07-26 06:21:15.008504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.008523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700)
00:29:03.912 [2024-07-26 06:21:15.008544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.912 [2024-07-26 06:21:15.008582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.008597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:29:03.912 [2024-07-26 06:21:15.008615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.912 [2024-07-26 06:21:15.008636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.008650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700)
00:29:03.912 [2024-07-26 06:21:15.008669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.912 [2024-07-26 06:21:15.008695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.008714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700)
00:29:03.912 [2024-07-26 06:21:15.008733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.912 [2024-07-26 06:21:15.008781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:29:03.912 [2024-07-26 06:21:15.008799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:29:03.912 [2024-07-26 06:21:15.008827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0
00:29:03.912 [2024-07-26 06:21:15.008846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0
00:29:03.912 [2024-07-26 06:21:15.009162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:29:03.912 [2024-07-26 06:21:15.009186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:29:03.912 [2024-07-26 06:21:15.009199] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.009211] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5
00:29:03.912 [2024-07-26 06:21:15.009225] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192
00:29:03.912 [2024-07-26 06:21:15.009238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.009271] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:29:03.912 [2024-07-26 06:21:15.009287] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:29:03.912
[2024-07-26 06:21:15.009307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.912 [2024-07-26 06:21:15.009325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.912 [2024-07-26 06:21:15.009336] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009347] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:29:03.912 [2024-07-26 06:21:15.009359] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:29:03.912 [2024-07-26 06:21:15.009371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009393] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009408] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.912 [2024-07-26 06:21:15.009446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.912 [2024-07-26 06:21:15.009457] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009468] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:29:03.912 [2024-07-26 06:21:15.009480] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:29:03.912 [2024-07-26 06:21:15.009492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009509] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009552] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:03.912 [2024-07-26 06:21:15.009568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:03.912 [2024-07-26 06:21:15.009578] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009588] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:29:03.912 [2024-07-26 06:21:15.009600] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:03.912 [2024-07-26 06:21:15.009611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009644] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009656] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.912 [2024-07-26 06:21:15.009689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.912 [2024-07-26 06:21:15.009700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:03.912 [2024-07-26 06:21:15.009751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.912 [2024-07-26 06:21:15.009773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.912 [2024-07-26 06:21:15.009784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:03.912 [2024-07-26 06:21:15.009819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.912 
[2024-07-26 06:21:15.009836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.912 [2024-07-26 06:21:15.009847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:29:03.912 [2024-07-26 06:21:15.009880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.912 [2024-07-26 06:21:15.009897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.912 [2024-07-26 06:21:15.009908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.912 [2024-07-26 06:21:15.009918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:29:03.912
=====================================================
00:29:03.913 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:03.913 =====================================================
00:29:03.913 Controller Capabilities/Features
00:29:03.913 ================================
00:29:03.913 Vendor ID: 8086
00:29:03.913 Subsystem Vendor ID: 8086
00:29:03.913 Serial Number: SPDK00000000000001
00:29:03.913 Model Number: SPDK bdev Controller
00:29:03.913 Firmware Version: 24.09
00:29:03.913 Recommended Arb Burst: 6
00:29:03.913 IEEE OUI Identifier: e4 d2 5c
00:29:03.913 Multi-path I/O
00:29:03.913 May have multiple subsystem ports: Yes
00:29:03.913 May have multiple controllers: Yes
00:29:03.913 Associated with SR-IOV VF: No
00:29:03.913 Max Data Transfer Size: 131072
00:29:03.913 Max Number of Namespaces: 32
00:29:03.913 Max Number of I/O Queues: 127
00:29:03.913 NVMe Specification Version (VS): 1.3
00:29:03.913 NVMe Specification Version (Identify): 1.3
00:29:03.913 Maximum Queue Entries: 128
00:29:03.913 Contiguous Queues Required: Yes
00:29:03.913 Arbitration Mechanisms Supported
00:29:03.913 Weighted Round Robin: Not Supported
00:29:03.913 Vendor Specific: Not Supported
00:29:03.913 Reset Timeout: 15000 ms
00:29:03.913 Doorbell Stride: 4 bytes
00:29:03.913 NVM Subsystem Reset: Not Supported
00:29:03.913 Command Sets Supported
00:29:03.913 NVM Command Set: Supported
00:29:03.913 Boot Partition: Not Supported
00:29:03.913 Memory Page Size Minimum: 4096 bytes
00:29:03.913 Memory Page Size Maximum: 4096 bytes
00:29:03.913 Persistent Memory Region: Not Supported
00:29:03.913 Optional Asynchronous Events Supported
00:29:03.913 Namespace Attribute Notices: Supported
00:29:03.913 Firmware Activation Notices: Not Supported
00:29:03.913 ANA Change Notices: Not Supported
00:29:03.913 PLE Aggregate Log Change Notices: Not Supported
00:29:03.913 LBA Status Info Alert Notices: Not Supported
00:29:03.913 EGE Aggregate Log Change Notices: Not Supported
00:29:03.913 Normal NVM Subsystem Shutdown event: Not Supported
00:29:03.913 Zone Descriptor Change Notices: Not Supported
00:29:03.913 Discovery Log Change Notices: Not Supported
00:29:03.913 Controller Attributes
00:29:03.913 128-bit Host Identifier: Supported
00:29:03.913 Non-Operational Permissive Mode: Not Supported
00:29:03.913 NVM Sets: Not Supported
00:29:03.913 Read Recovery Levels: Not Supported
00:29:03.913 Endurance Groups: Not Supported
00:29:03.913 Predictable Latency Mode: Not Supported
00:29:03.913 Traffic Based Keep ALive: Not Supported
00:29:03.913 Namespace Granularity: Not Supported
00:29:03.913 SQ Associations: Not Supported
00:29:03.913 UUID List: Not Supported
00:29:03.913 Multi-Domain Subsystem: Not Supported
00:29:03.913 Fixed Capacity Management: Not Supported
00:29:03.913 Variable Capacity Management: Not Supported
00:29:03.913 Delete Endurance Group: Not Supported
00:29:03.913 Delete NVM Set: Not Supported
00:29:03.913 Extended LBA Formats Supported: Not Supported
00:29:03.913 Flexible Data Placement Supported: Not Supported
00:29:03.913 
00:29:03.913 Controller Memory Buffer Support
00:29:03.913 ================================
00:29:03.913 Supported: No
00:29:03.913 
00:29:03.913 Persistent Memory Region Support
00:29:03.913 ================================
00:29:03.913 Supported: No
00:29:03.913 
00:29:03.913 Admin Command Set Attributes
00:29:03.913 ============================
00:29:03.913 Security Send/Receive: Not Supported
00:29:03.913 Format NVM: Not Supported
00:29:03.913 Firmware Activate/Download: Not Supported
00:29:03.913 Namespace Management: Not Supported
00:29:03.913 Device Self-Test: Not Supported
00:29:03.913 Directives: Not Supported
00:29:03.913 NVMe-MI: Not Supported
00:29:03.913 Virtualization Management: Not Supported
00:29:03.913 Doorbell Buffer Config: Not Supported
00:29:03.913 Get LBA Status Capability: Not Supported
00:29:03.913 Command & Feature Lockdown Capability: Not Supported
00:29:03.913 Abort Command Limit: 4
00:29:03.913 Async Event Request Limit: 4
00:29:03.913 Number of Firmware Slots: N/A
00:29:03.913 Firmware Slot 1 Read-Only: N/A
00:29:03.913 Firmware Activation Without Reset: N/A
00:29:03.913 Multiple Update Detection Support: N/A
00:29:03.913 Firmware Update Granularity: No Information Provided
00:29:03.913 Per-Namespace SMART Log: No
00:29:03.913 Asymmetric Namespace Access Log Page: Not Supported
00:29:03.913 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:29:03.913 Command Effects Log Page: Supported
00:29:03.913 Get Log Page Extended Data: Supported
00:29:03.913 Telemetry Log Pages: Not Supported
00:29:03.913 Persistent Event Log Pages: Not Supported
00:29:03.913 Supported Log Pages Log Page: May Support
00:29:03.913 Commands Supported & Effects Log Page: Not Supported
00:29:03.913 Feature Identifiers & Effects Log Page:May Support
00:29:03.913 NVMe-MI Commands & Effects Log Page: May Support
00:29:03.913 Data Area 4 for Telemetry Log: Not Supported
00:29:03.913 Error Log Page Entries Supported: 128
00:29:03.913 Keep Alive: Supported
00:29:03.913 Keep Alive Granularity: 10000 ms
00:29:03.913 
00:29:03.913 NVM Command Set Attributes
00:29:03.913 ==========================
00:29:03.913 Submission Queue Entry Size
00:29:03.913 Max: 64
00:29:03.913 Min: 64
00:29:03.913 Completion Queue Entry Size
00:29:03.913 Max: 16
00:29:03.913 Min: 16
00:29:03.913 Number of Namespaces: 32
00:29:03.913 Compare Command: Supported
00:29:03.913 Write Uncorrectable Command: Not Supported
00:29:03.913 Dataset Management Command: Supported
00:29:03.913 Write Zeroes Command: Supported
00:29:03.913 Set Features Save Field: Not Supported
00:29:03.913 Reservations: Supported
00:29:03.913 Timestamp: Not Supported
00:29:03.913 Copy: Supported
00:29:03.913 Volatile Write Cache: Present
00:29:03.913 Atomic Write Unit (Normal): 1
00:29:03.913 Atomic Write Unit (PFail): 1
00:29:03.913 Atomic Compare & Write Unit: 1
00:29:03.913 Fused Compare & Write: Supported
00:29:03.913 Scatter-Gather List
00:29:03.913 SGL Command Set: Supported
00:29:03.913 SGL Keyed: Supported
00:29:03.913 SGL Bit Bucket Descriptor: Not Supported
00:29:03.913 SGL Metadata Pointer: Not Supported
00:29:03.913 Oversized SGL: Not Supported
00:29:03.913 SGL Metadata Address: Not Supported
00:29:03.913 SGL Offset: Supported
00:29:03.913 Transport SGL Data Block: Not Supported
00:29:03.913 Replay Protected Memory Block: Not Supported
00:29:03.913 
00:29:03.913 Firmware Slot Information
00:29:03.913 =========================
00:29:03.913 Active slot: 1
00:29:03.913 Slot 1 Firmware Revision: 24.09
00:29:03.913 
00:29:03.913 
00:29:03.913 Commands Supported and Effects
00:29:03.913 ==============================
00:29:03.913 Admin Commands
00:29:03.913 --------------
00:29:03.913 Get Log Page (02h): Supported
00:29:03.913 Identify (06h): Supported
00:29:03.913 Abort (08h): Supported
00:29:03.913 Set Features (09h): Supported
00:29:03.913 Get Features (0Ah): Supported
00:29:03.913 Asynchronous Event Request (0Ch): Supported
00:29:03.913 Keep Alive (18h): Supported
00:29:03.913 I/O Commands
00:29:03.913 ------------
00:29:03.913 Flush (00h): Supported LBA-Change
00:29:03.913 Write (01h): Supported LBA-Change
00:29:03.913 Read (02h): Supported
00:29:03.913 Compare (05h): Supported
00:29:03.913 Write Zeroes (08h): Supported LBA-Change
00:29:03.913 Dataset Management (09h): Supported LBA-Change
00:29:03.913 Copy (19h): Supported LBA-Change
00:29:03.913 
00:29:03.913 Error Log
00:29:03.913 =========
00:29:03.913 
00:29:03.913 Arbitration
00:29:03.913 ===========
00:29:03.913 Arbitration Burst: 1
00:29:03.913 
00:29:03.913 Power Management
00:29:03.913 ================
00:29:03.913 Number of Power States: 1
00:29:03.913 Current Power State: Power State #0
00:29:03.913 Power State #0:
00:29:03.913 Max Power: 0.00 W
00:29:03.913 Non-Operational State: Operational
00:29:03.913 Entry Latency: Not Reported
00:29:03.913 Exit Latency: Not Reported
00:29:03.913 Relative Read Throughput: 0
00:29:03.913 Relative Read Latency: 0
00:29:03.913 Relative Write Throughput: 0
00:29:03.914 Relative Write Latency: 0
00:29:03.914 Idle Power: Not Reported
00:29:03.914 Active Power: Not Reported
00:29:03.914 Non-Operational Permissive Mode: Not Supported
00:29:03.914 
00:29:03.914 Health Information
00:29:03.914 ==================
00:29:03.914 Critical Warnings:
00:29:03.914 Available Spare Space: OK
00:29:03.914 Temperature: OK
00:29:03.914 Device Reliability: OK
00:29:03.914 Read Only: No
00:29:03.914 Volatile Memory Backup: OK
00:29:03.914 Current Temperature: 0 Kelvin (-273 Celsius)
00:29:03.914 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:29:03.914 Available Spare: 0%
00:29:03.914 Available Spare Threshold: 0%
00:29:03.914 Life Percentage Used:[2024-07-26 06:21:15.010171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.010192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.010213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 
cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.010264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:29:03.914 [2024-07-26 06:21:15.010497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.010520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.010533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.010545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.010654] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:03.914 [2024-07-26 06:21:15.010687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.010716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.914 [2024-07-26 06:21:15.010732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.010746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.914 [2024-07-26 06:21:15.010759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.010773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.914 [2024-07-26 06:21:15.010786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.010799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.914 [2024-07-26 06:21:15.010822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.010836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.010849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.010869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.010905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.914 [2024-07-26 06:21:15.015077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.015107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.015121] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.015163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.015210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.015268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.914 [2024-07-26 06:21:15.015466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.015486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.015498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.015525] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:03.914 [2024-07-26 06:21:15.015540] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:03.914 [2024-07-26 06:21:15.015567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.015637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.015668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.914 [2024-07-26 06:21:15.015839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.015861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.015873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.015913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015929] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.015941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.015959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.016005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.914 [2024-07-26 06:21:15.016206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.016227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.016239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.016250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.016277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.016293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.016304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.016327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.016378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.914 [2024-07-26 06:21:15.016579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.016602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.016613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 
06:21:15.016625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.016651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.016667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.016678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.016697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.016743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.914 [2024-07-26 06:21:15.016960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.016982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.016993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.017004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.017031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.017047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.017067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.017088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.914 [2024-07-26 06:21:15.017120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.914 [2024-07-26 06:21:15.017283] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.914 [2024-07-26 06:21:15.017303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.914 [2024-07-26 06:21:15.017314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.017326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.914 [2024-07-26 06:21:15.017353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.017368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.914 [2024-07-26 06:21:15.017380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.914 [2024-07-26 06:21:15.017398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.915 [2024-07-26 06:21:15.017429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.915 [2024-07-26 06:21:15.017660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.915 [2024-07-26 06:21:15.017681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.915 [2024-07-26 06:21:15.017692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.017703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.915 [2024-07-26 06:21:15.017731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.017746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.017757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.915 [2024-07-26 06:21:15.017776] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.915 [2024-07-26 06:21:15.017829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.915 [2024-07-26 06:21:15.018031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.915 [2024-07-26 06:21:15.018052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.915 [2024-07-26 06:21:15.018072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.915 [2024-07-26 06:21:15.018112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.915 [2024-07-26 06:21:15.018157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.915 [2024-07-26 06:21:15.018188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.915 [2024-07-26 06:21:15.018363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.915 [2024-07-26 06:21:15.018385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.915 [2024-07-26 06:21:15.018396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.915 [2024-07-26 06:21:15.018434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.915 [2024-07-26 
06:21:15.018450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.915 [2024-07-26 06:21:15.018480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.915 [2024-07-26 06:21:15.018527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.915 [2024-07-26 06:21:15.018735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.915 [2024-07-26 06:21:15.018755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.915 [2024-07-26 06:21:15.018767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.915 [2024-07-26 06:21:15.018810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.018839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.915 [2024-07-26 06:21:15.018858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.915 [2024-07-26 06:21:15.018904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.915 [2024-07-26 06:21:15.023081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.915 [2024-07-26 06:21:15.023105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.915 [2024-07-26 06:21:15.023117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:03.915 [2024-07-26 06:21:15.023128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.915 [2024-07-26 06:21:15.023171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.023188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.023200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:03.915 [2024-07-26 06:21:15.023226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.915 [2024-07-26 06:21:15.023264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:03.915 [2024-07-26 06:21:15.023431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:03.915 [2024-07-26 06:21:15.023451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:03.915 [2024-07-26 06:21:15.023463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:03.915 [2024-07-26 06:21:15.023474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:03.915 [2024-07-26 06:21:15.023511] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:29:03.915 0%
00:29:03.915 Data Units Read: 0
00:29:03.915 Data Units Written: 0
00:29:03.915 Host Read Commands: 0
00:29:03.915 Host Write Commands: 0
00:29:03.915 Controller Busy Time: 0 minutes
00:29:03.915 Power Cycles: 0
00:29:03.915 Power On Hours: 0 hours
00:29:03.915 Unsafe Shutdowns: 0
00:29:03.915 Unrecoverable Media Errors: 0
00:29:03.915 Lifetime Error Log Entries: 0
00:29:03.915 Warning Temperature Time: 0 minutes
00:29:03.915 Critical Temperature Time: 0 minutes
00:29:03.915 
00:29:03.915 Number of Queues
00:29:03.915 ================
00:29:03.915 Number of I/O Submission Queues: 127
00:29:03.915 Number of I/O Completion Queues: 127
00:29:03.915 
00:29:03.915 Active Namespaces
00:29:03.915 =================
00:29:03.915 Namespace ID:1
00:29:03.915 Error Recovery Timeout: Unlimited
00:29:03.915 Command Set Identifier: NVM (00h)
00:29:03.915 Deallocate: Supported
00:29:03.915 Deallocated/Unwritten Error: Not Supported
00:29:03.915 Deallocated Read Value: Unknown
00:29:03.915 Deallocate in Write Zeroes: Not Supported
00:29:03.915 Deallocated Guard Field: 0xFFFF
00:29:03.915 Flush: Supported
00:29:03.915 Reservation: Supported
00:29:03.915 Namespace Sharing Capabilities: Multiple Controllers
00:29:03.915 Size (in LBAs): 131072 (0GiB)
00:29:03.915 Capacity (in LBAs): 131072 (0GiB)
00:29:03.915 Utilization (in LBAs): 131072 (0GiB)
00:29:03.915 NGUID: ABCDEF0123456789ABCDEF0123456789
00:29:03.915 EUI64: ABCDEF0123456789
00:29:03.915 UUID: 9c8da8ff-062d-4081-9209-e60565b325d8
00:29:03.915 Thin Provisioning: Not Supported
00:29:03.915 Per-NS Atomic Units: Yes
00:29:03.915 Atomic Boundary Size (Normal): 0
00:29:03.915 Atomic Boundary Size (PFail): 0
00:29:03.915 Atomic Boundary Offset: 0
00:29:03.915 Maximum Single Source Range Length: 65535
00:29:03.915 Maximum Copy Length: 65535
00:29:03.915 Maximum Source Range Count: 1
00:29:03.915 NGUID/EUI64 Never Reused: No
00:29:03.915 Namespace Write Protected: No
00:29:03.915 Number of LBA Formats: 1
00:29:03.915 Current LBA Format: LBA Format #00
00:29:03.915 LBA Format #00: Data Size: 512 Metadata Size: 0
00:29:03.915 
00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:03.915 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:03.916 rmmod nvme_tcp 00:29:03.916 rmmod nvme_fabrics 00:29:03.916 rmmod nvme_keyring 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 244370 ']' 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 244370 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 244370 ']' 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 244370 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 244370 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 244370' 00:29:03.916 killing process with pid 244370 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 244370 00:29:03.916 06:21:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 244370 00:29:05.289 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.289 06:21:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:07.830 00:29:07.830 real 0m7.359s 00:29:07.830 user 0m10.445s 00:29:07.830 sys 0m2.004s 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:07.830 ************************************ 00:29:07.830 END TEST nvmf_identify 00:29:07.830 ************************************ 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.830 ************************************ 00:29:07.830 START TEST nvmf_perf 00:29:07.830 ************************************ 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:07.830 * Looking for test storage... 
00:29:07.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.830 06:21:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.830 06:21:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:07.830 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:07.831 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.831 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.831 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.831 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:07.831 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:07.831 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:07.831 06:21:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.734 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.735 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.735 06:21:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.735 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:29:09.735 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.735 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:09.735 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:29:09.735 00:29:09.735 --- 10.0.0.2 ping statistics --- 00:29:09.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.735 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:29:09.735 00:29:09.735 --- 10.0.0.1 ping statistics --- 00:29:09.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.735 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=246593 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 246593 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 246593 ']' 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:09.735 06:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:09.735 [2024-07-26 06:21:20.870078] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:09.735 [2024-07-26 06:21:20.870248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.735 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.735 [2024-07-26 06:21:21.007796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.994 [2024-07-26 06:21:21.267407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:09.994 [2024-07-26 06:21:21.267489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.994 [2024-07-26 06:21:21.267533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.994 [2024-07-26 06:21:21.267565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.994 [2024-07-26 06:21:21.267600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.994 [2024-07-26 06:21:21.267743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.994 [2024-07-26 06:21:21.267830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.994 [2024-07-26 06:21:21.267927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.994 [2024-07-26 06:21:21.267932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:10.560 06:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:13.862 06:21:24 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:13.862 06:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:13.862 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:13.862 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:14.428 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:14.428 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:14.428 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:14.428 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:14.428 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:14.428 [2024-07-26 06:21:25.747364] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.687 06:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.687 06:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:14.687 06:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.944 06:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:14.944 06:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:15.202 06:21:26 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.460 [2024-07-26 06:21:26.748435] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.460 06:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.717 06:21:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:15.717 06:21:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:15.717 06:21:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:15.717 06:21:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:17.088 Initializing NVMe Controllers 00:29:17.088 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:17.088 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:17.088 Initialization complete. Launching workers. 
00:29:17.088 ======================================================== 00:29:17.088 Latency(us) 00:29:17.088 Device Information : IOPS MiB/s Average min max 00:29:17.088 PCIE (0000:88:00.0) NSID 1 from core 0: 75335.90 294.28 424.04 42.24 6317.28 00:29:17.088 ======================================================== 00:29:17.088 Total : 75335.90 294.28 424.04 42.24 6317.28 00:29:17.088 00:29:17.346 06:21:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:17.346 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.719 Initializing NVMe Controllers 00:29:18.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:18.719 Initialization complete. Launching workers. 
00:29:18.719 ======================================================== 00:29:18.719 Latency(us) 00:29:18.720 Device Information : IOPS MiB/s Average min max 00:29:18.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 104.00 0.41 9621.47 225.06 44746.88 00:29:18.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20746.93 7919.45 47901.37 00:29:18.720 ======================================================== 00:29:18.720 Total : 154.00 0.60 13233.63 225.06 47901.37 00:29:18.720 00:29:18.720 06:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.720 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.093 Initializing NVMe Controllers 00:29:20.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:20.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:20.093 Initialization complete. Launching workers. 
00:29:20.093 ======================================================== 00:29:20.093 Latency(us) 00:29:20.093 Device Information : IOPS MiB/s Average min max 00:29:20.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5414.99 21.15 5930.00 972.57 13190.56 00:29:20.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3799.99 14.84 8465.18 5749.39 17162.27 00:29:20.093 ======================================================== 00:29:20.093 Total : 9214.98 36.00 6975.43 972.57 17162.27 00:29:20.093 00:29:20.093 06:21:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:20.093 06:21:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:20.093 06:21:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.093 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.373 Initializing NVMe Controllers 00:29:23.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.373 Controller IO queue size 128, less than required. 00:29:23.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:23.373 Controller IO queue size 128, less than required. 00:29:23.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:23.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:23.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:23.373 Initialization complete. Launching workers. 
00:29:23.373 ======================================================== 00:29:23.373 Latency(us) 00:29:23.374 Device Information : IOPS MiB/s Average min max 00:29:23.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1182.65 295.66 113243.74 69696.64 295976.76 00:29:23.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 549.37 137.34 247578.87 119869.58 494662.90 00:29:23.374 ======================================================== 00:29:23.374 Total : 1732.02 433.00 155852.92 69696.64 494662.90 00:29:23.374 00:29:23.374 06:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:23.374 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.374 No valid NVMe controllers or AIO or URING devices found 00:29:23.374 Initializing NVMe Controllers 00:29:23.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.374 Controller IO queue size 128, less than required. 00:29:23.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:23.374 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:23.374 Controller IO queue size 128, less than required. 00:29:23.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:23.374 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:23.374 WARNING: Some requested NVMe devices were skipped 00:29:23.374 06:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:23.374 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.904 Initializing NVMe Controllers 00:29:25.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.904 Controller IO queue size 128, less than required. 00:29:25.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.904 Controller IO queue size 128, less than required. 00:29:25.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:25.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:25.904 Initialization complete. Launching workers. 
00:29:25.904 00:29:25.904 ==================== 00:29:25.904 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:25.904 TCP transport: 00:29:25.904 polls: 5758 00:29:25.904 idle_polls: 1794 00:29:25.904 sock_completions: 3964 00:29:25.904 nvme_completions: 4709 00:29:25.904 submitted_requests: 7060 00:29:25.904 queued_requests: 1 00:29:25.904 00:29:25.904 ==================== 00:29:25.904 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:25.904 TCP transport: 00:29:25.904 polls: 10011 00:29:25.904 idle_polls: 5777 00:29:25.904 sock_completions: 4234 00:29:25.904 nvme_completions: 4569 00:29:25.904 submitted_requests: 6828 00:29:25.904 queued_requests: 1 00:29:25.904 ======================================================== 00:29:25.904 Latency(us) 00:29:25.904 Device Information : IOPS MiB/s Average min max 00:29:25.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1175.24 293.81 114886.34 71602.14 318300.79 00:29:25.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1140.29 285.07 118487.21 71635.33 431823.44 00:29:25.904 ======================================================== 00:29:25.904 Total : 2315.53 578.88 116659.60 71602.14 431823.44 00:29:25.904 00:29:26.162 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:26.162 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.420 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:26.420 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:26.420 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:29.696 06:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=55e68b1e-b070-445e-ae49-f7728bf73d59 00:29:29.696 06:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 55e68b1e-b070-445e-ae49-f7728bf73d59 00:29:29.696 06:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=55e68b1e-b070-445e-ae49-f7728bf73d59 00:29:29.696 06:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:29.696 06:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:29.696 06:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:29.696 06:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:29.952 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:29.952 { 00:29:29.953 "uuid": "55e68b1e-b070-445e-ae49-f7728bf73d59", 00:29:29.953 "name": "lvs_0", 00:29:29.953 "base_bdev": "Nvme0n1", 00:29:29.953 "total_data_clusters": 238234, 00:29:29.953 "free_clusters": 238234, 00:29:29.953 "block_size": 512, 00:29:29.953 "cluster_size": 4194304 00:29:29.953 } 00:29:29.953 ]' 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="55e68b1e-b070-445e-ae49-f7728bf73d59") .free_clusters' 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="55e68b1e-b070-445e-ae49-f7728bf73d59") .cluster_size' 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:29:29.953 952936 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:29.953 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55e68b1e-b070-445e-ae49-f7728bf73d59 lbd_0 20480 00:29:30.883 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8699b2dc-0dac-4d13-8906-58cbb0c55b5f 00:29:30.883 06:21:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8699b2dc-0dac-4d13-8906-58cbb0c55b5f lvs_n_0 00:29:31.447 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1eac9545-9070-417a-b2e0-ba61fc008369 00:29:31.447 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1eac9545-9070-417a-b2e0-ba61fc008369 00:29:31.447 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1eac9545-9070-417a-b2e0-ba61fc008369 00:29:31.447 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:31.447 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:31.447 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:31.447 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:31.705 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:31.705 { 00:29:31.705 "uuid": "55e68b1e-b070-445e-ae49-f7728bf73d59", 00:29:31.705 "name": "lvs_0", 00:29:31.705 "base_bdev": "Nvme0n1", 00:29:31.705 "total_data_clusters": 238234, 00:29:31.705 "free_clusters": 233114, 00:29:31.705 "block_size": 512, 00:29:31.705 
"cluster_size": 4194304 00:29:31.705 }, 00:29:31.705 { 00:29:31.705 "uuid": "1eac9545-9070-417a-b2e0-ba61fc008369", 00:29:31.705 "name": "lvs_n_0", 00:29:31.705 "base_bdev": "8699b2dc-0dac-4d13-8906-58cbb0c55b5f", 00:29:31.705 "total_data_clusters": 5114, 00:29:31.705 "free_clusters": 5114, 00:29:31.705 "block_size": 512, 00:29:31.705 "cluster_size": 4194304 00:29:31.705 } 00:29:31.705 ]' 00:29:31.705 06:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1eac9545-9070-417a-b2e0-ba61fc008369") .free_clusters' 00:29:31.705 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:31.705 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1eac9545-9070-417a-b2e0-ba61fc008369") .cluster_size' 00:29:31.971 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:31.971 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:31.971 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:31.971 20456 00:29:31.971 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:31.971 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1eac9545-9070-417a-b2e0-ba61fc008369 lbd_nest_0 20456 00:29:32.234 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=406d9e6f-b110-4820-8a71-7d2531413692 00:29:32.234 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.234 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:32.234 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 406d9e6f-b110-4820-8a71-7d2531413692 00:29:32.799 06:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.799 06:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:32.799 06:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:32.799 06:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:32.799 06:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:32.799 06:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.057 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.243 Initializing NVMe Controllers 00:29:45.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.243 Initialization complete. Launching workers. 
00:29:45.243 ======================================================== 00:29:45.243 Latency(us) 00:29:45.243 Device Information : IOPS MiB/s Average min max 00:29:45.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.19 0.02 20368.20 265.74 47706.24 00:29:45.243 ======================================================== 00:29:45.243 Total : 49.19 0.02 20368.20 265.74 47706.24 00:29:45.243 00:29:45.243 06:21:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:45.243 06:21:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:45.243 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.215 Initializing NVMe Controllers 00:29:55.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.215 Initialization complete. Launching workers. 
00:29:55.215 ======================================================== 00:29:55.215 Latency(us) 00:29:55.215 Device Information : IOPS MiB/s Average min max 00:29:55.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.90 10.11 12366.68 3995.88 47896.59 00:29:55.215 ======================================================== 00:29:55.215 Total : 80.90 10.11 12366.68 3995.88 47896.59 00:29:55.215 00:29:55.215 06:22:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:55.215 06:22:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:55.215 06:22:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.215 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.182 Initializing NVMe Controllers 00:30:05.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.182 Initialization complete. Launching workers. 
00:30:05.182 ======================================================== 00:30:05.182 Latency(us) 00:30:05.182 Device Information : IOPS MiB/s Average min max 00:30:05.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4785.40 2.34 6688.05 400.62 14205.88 00:30:05.182 ======================================================== 00:30:05.182 Total : 4785.40 2.34 6688.05 400.62 14205.88 00:30:05.182 00:30:05.182 06:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:05.182 06:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.182 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.182 Initializing NVMe Controllers 00:30:15.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.182 Initialization complete. Launching workers. 
00:30:15.182 ======================================================== 00:30:15.182 Latency(us) 00:30:15.182 Device Information : IOPS MiB/s Average min max 00:30:15.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2411.27 301.41 13276.57 811.25 29627.67 00:30:15.182 ======================================================== 00:30:15.182 Total : 2411.27 301.41 13276.57 811.25 29627.67 00:30:15.182 00:30:15.182 06:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:15.182 06:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.182 06:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.182 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.384 Initializing NVMe Controllers 00:30:27.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.384 Controller IO queue size 128, less than required. 00:30:27.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:27.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:27.384 Initialization complete. Launching workers. 
00:30:27.384 ======================================================== 00:30:27.384 Latency(us) 00:30:27.384 Device Information : IOPS MiB/s Average min max 00:30:27.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8643.46 4.22 14823.68 1846.03 37476.70 00:30:27.384 ======================================================== 00:30:27.384 Total : 8643.46 4.22 14823.68 1846.03 37476.70 00:30:27.384 00:30:27.384 06:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:27.384 06:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.384 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.357 Initializing NVMe Controllers 00:30:37.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.357 Controller IO queue size 128, less than required. 00:30:37.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:37.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.357 Initialization complete. Launching workers. 
00:30:37.357 ======================================================== 00:30:37.357 Latency(us) 00:30:37.357 Device Information : IOPS MiB/s Average min max 00:30:37.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.80 148.97 107893.46 16065.69 251343.56 00:30:37.357 ======================================================== 00:30:37.357 Total : 1191.80 148.97 107893.46 16065.69 251343.56 00:30:37.357 00:30:37.357 06:22:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.357 06:22:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 406d9e6f-b110-4820-8a71-7d2531413692 00:30:37.357 06:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:37.357 06:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8699b2dc-0dac-4d13-8906-58cbb0c55b5f 00:30:37.615 06:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.874 rmmod nvme_tcp 00:30:37.874 rmmod nvme_fabrics 00:30:37.874 rmmod nvme_keyring 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 246593 ']' 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 246593 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 246593 ']' 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 246593 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 246593 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 246593' 00:30:37.874 killing process with pid 246593 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 246593 00:30:37.874 06:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 246593 00:30:40.408 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:30:40.408 06:22:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:40.408 06:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:40.408 06:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:40.408 06:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:40.408 06:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:40.408 06:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.408 06:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.408 06:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:42.947 00:30:42.947 real 1m35.055s 00:30:42.947 user 5m49.926s 00:30:42.947 sys 0m16.033s 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:42.947 ************************************ 00:30:42.947 END TEST nvmf_perf 00:30:42.947 ************************************ 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.947 ************************************ 00:30:42.947 START TEST nvmf_fio_host 00:30:42.947 ************************************ 00:30:42.947 06:22:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:42.947 * Looking for test storage... 00:30:42.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.947 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:42.948 06:22:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.329 06:22:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:44.329 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:44.329 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:44.329 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:44.329 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.329 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:44.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:44.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:30:44.617 00:30:44.617 --- 10.0.0.2 ping statistics --- 00:30:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.617 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:30:44.617 00:30:44.617 --- 10.0.0.1 ping statistics --- 00:30:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.617 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:44.617 
06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=259089 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 259089 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 259089 ']' 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:44.617 06:22:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.617 [2024-07-26 06:22:55.839491] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:44.617 [2024-07-26 06:22:55.839633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.617 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.875 [2024-07-26 06:22:55.971974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.875 [2024-07-26 06:22:56.198077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.875 [2024-07-26 06:22:56.198153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.875 [2024-07-26 06:22:56.198192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.875 [2024-07-26 06:22:56.198210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.875 [2024-07-26 06:22:56.198229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:44.875 [2024-07-26 06:22:56.198336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.875 [2024-07-26 06:22:56.198402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.875 [2024-07-26 06:22:56.198444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.875 [2024-07-26 06:22:56.198455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.810 06:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.810 06:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:30:45.810 06:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:45.810 [2024-07-26 06:22:57.028145] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.810 06:22:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:45.810 06:22:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.810 06:22:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.810 06:22:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:46.376 Malloc1 00:30:46.376 06:22:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.376 06:22:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:46.634 06:22:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.891 [2024-07-26 06:22:58.171894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.891 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:47.149 06:22:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:47.407 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:47.407 fio-3.35 00:30:47.407 Starting 1 thread 00:30:47.666 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.195 [2024-07-26 06:23:01.229517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:30:50.196 [2024-07-26 06:23:01.229603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:30:50.196 [2024-07-26 06:23:01.229626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same 
with the state(5) to be set 00:30:50.196 [2024-07-26 06:23:01.229645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:30:50.196 [2024-07-26 06:23:01.229663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:30:50.196 00:30:50.196 test: (groupid=0, jobs=1): err= 0: pid=259572: Fri Jul 26 06:23:01 2024 00:30:50.196 read: IOPS=6376, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec) 00:30:50.196 slat (usec): min=2, max=162, avg= 3.56, stdev= 2.41 00:30:50.196 clat (usec): min=3761, max=18609, avg=11000.82, stdev=940.88 00:30:50.196 lat (usec): min=3791, max=18613, avg=11004.38, stdev=940.75 00:30:50.196 clat percentiles (usec): 00:30:50.196 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:30:50.196 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:30:50.196 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:30:50.196 | 99.00th=[13042], 99.50th=[13304], 99.90th=[17957], 99.95th=[18482], 00:30:50.196 | 99.99th=[18482] 00:30:50.196 bw ( KiB/s): min=24424, max=25968, per=99.90%, avg=25480.00, stdev=710.61, samples=4 00:30:50.196 iops : min= 6106, max= 6492, avg=6370.00, stdev=177.65, samples=4 00:30:50.196 write: IOPS=6379, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec); 0 zone resets 00:30:50.196 slat (usec): min=2, max=166, avg= 3.73, stdev= 2.07 00:30:50.196 clat (usec): min=1825, max=17659, avg=8954.60, stdev=803.09 00:30:50.196 lat (usec): min=1843, max=17663, avg=8958.33, stdev=803.01 00:30:50.196 clat percentiles (usec): 00:30:50.196 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8356], 00:30:50.196 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:30:50.196 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:30:50.196 | 99.00th=[10683], 99.50th=[11207], 99.90th=[15401], 99.95th=[16188], 
00:30:50.196 | 99.99th=[16581] 00:30:50.196 bw ( KiB/s): min=25296, max=25728, per=99.91%, avg=25494.00, stdev=190.37, samples=4 00:30:50.196 iops : min= 6324, max= 6432, avg=6373.50, stdev=47.59, samples=4 00:30:50.196 lat (msec) : 2=0.01%, 4=0.07%, 10=53.34%, 20=46.57% 00:30:50.196 cpu : usr=65.42%, sys=31.34%, ctx=54, majf=0, minf=1536 00:30:50.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:50.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:50.196 issued rwts: total=12804,12810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:50.196 00:30:50.196 Run status group 0 (all jobs): 00:30:50.196 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2008-2008msec 00:30:50.196 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.5MB), run=2008-2008msec 00:30:50.196 ----------------------------------------------------- 00:30:50.196 Suppressions used: 00:30:50.196 count bytes template 00:30:50.196 1 57 /usr/src/fio/parse.c 00:30:50.196 1 8 libtcmalloc_minimal.so 00:30:50.196 ----------------------------------------------------- 00:30:50.196 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:50.196 06:23:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:50.454 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:50.454 fio-3.35 00:30:50.454 Starting 1 thread 00:30:50.712 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.240 00:30:53.240 test: (groupid=0, jobs=1): err= 0: pid=260015: Fri Jul 26 06:23:04 2024 00:30:53.240 read: IOPS=6314, BW=98.7MiB/s (103MB/s)(198MiB/2009msec) 00:30:53.240 slat (usec): min=3, max=142, avg= 4.87, stdev= 2.22 00:30:53.240 clat (usec): min=3516, max=24798, avg=11867.23, stdev=2445.84 00:30:53.240 lat (usec): min=3521, max=24803, avg=11872.10, stdev=2445.91 00:30:53.240 clat percentiles (usec): 00:30:53.240 | 1.00th=[ 6194], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[10028], 00:30:53.240 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:30:53.240 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15270], 95.00th=[16188], 00:30:53.240 | 99.00th=[18220], 99.50th=[19268], 99.90th=[20579], 99.95th=[21103], 00:30:53.240 | 99.99th=[21627] 00:30:53.240 bw ( KiB/s): min=41824, max=57376, per=49.45%, avg=49960.00, stdev=8359.23, samples=4 00:30:53.240 iops : min= 2614, max= 3586, avg=3122.50, stdev=522.45, samples=4 00:30:53.240 write: IOPS=3673, BW=57.4MiB/s (60.2MB/s)(102MiB/1783msec); 0 zone resets 00:30:53.240 slat (usec): min=32, max=153, avg=35.67, stdev= 5.40 00:30:53.240 clat (usec): min=7451, max=29063, avg=15280.34, stdev=2603.65 00:30:53.240 lat (usec): min=7489, max=29116, avg=15316.02, stdev=2603.48 00:30:53.240 clat percentiles (usec): 00:30:53.240 | 1.00th=[ 9765], 5.00th=[11207], 10.00th=[12256], 20.00th=[13173], 00:30:53.240 | 30.00th=[13829], 40.00th=[14353], 50.00th=[15008], 60.00th=[15795], 00:30:53.240 | 70.00th=[16581], 80.00th=[17433], 90.00th=[18744], 95.00th=[19792], 00:30:53.240 | 99.00th=[21627], 99.50th=[22414], 99.90th=[28443], 99.95th=[28705], 00:30:53.240 | 99.99th=[28967] 00:30:53.240 bw ( KiB/s): min=43712, max=59392, 
per=88.43%, avg=51976.00, stdev=8453.75, samples=4 00:30:53.240 iops : min= 2732, max= 3712, avg=3248.50, stdev=528.36, samples=4 00:30:53.240 lat (msec) : 4=0.05%, 10=12.89%, 20=85.56%, 50=1.50% 00:30:53.240 cpu : usr=75.45%, sys=22.46%, ctx=43, majf=0, minf=2086 00:30:53.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:30:53.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:53.240 issued rwts: total=12686,6550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:53.240 00:30:53.240 Run status group 0 (all jobs): 00:30:53.240 READ: bw=98.7MiB/s (103MB/s), 98.7MiB/s-98.7MiB/s (103MB/s-103MB/s), io=198MiB (208MB), run=2009-2009msec 00:30:53.240 WRITE: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=102MiB (107MB), run=1783-1783msec 00:30:53.240 ----------------------------------------------------- 00:30:53.240 Suppressions used: 00:30:53.240 count bytes template 00:30:53.240 1 57 /usr/src/fio/parse.c 00:30:53.240 140 13440 /usr/src/fio/iolog.c 00:30:53.240 1 8 libtcmalloc_minimal.so 00:30:53.240 ----------------------------------------------------- 00:30:53.240 00:30:53.240 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 
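As a side note on reading the fio summaries above: the BW column is simply IOPS multiplied by the block size, reported in both binary (MiB/s) and decimal (MB/s) units. A minimal sanity-check sketch for the 16 KiB job above (the `fio_bw` helper is purely illustrative, not part of fio or SPDK):

```python
def fio_bw(iops: float, bs_bytes: int) -> tuple[float, float]:
    """Return (MiB/s, MB/s) for a given IOPS rate and block size.

    fio prints bandwidth twice: binary units (MiB/s, 2**20 bytes)
    and decimal units (MB/s, 10**6 bytes).
    """
    bytes_per_sec = iops * bs_bytes
    return bytes_per_sec / 2**20, bytes_per_sec / 10**6

# Read side of the 16 KiB mock_sgl job logged above: IOPS=6314, bs=16 KiB.
mib, mb = fio_bw(6314, 16 * 1024)
print(round(mib, 1), round(mb))  # ≈ 98.7 MiB/s, 103 MB/s — matching the log
```

The same arithmetic explains why the two figures in each `BW=` field (e.g. `98.7MiB/s (103MB/s)`) always differ by roughly 4.9%: that is the ratio of 2**20 to 10**6.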
00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:53.498 06:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:56.785 Nvme0n1 00:30:56.785 06:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=e78f48b0-34ce-4b3b-a544-423cf23321f9 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb e78f48b0-34ce-4b3b-a544-423cf23321f9 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e78f48b0-34ce-4b3b-a544-423cf23321f9 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:00.074 { 00:31:00.074 "uuid": "e78f48b0-34ce-4b3b-a544-423cf23321f9", 00:31:00.074 "name": "lvs_0", 00:31:00.074 "base_bdev": "Nvme0n1", 00:31:00.074 "total_data_clusters": 930, 00:31:00.074 "free_clusters": 930, 00:31:00.074 "block_size": 512, 00:31:00.074 "cluster_size": 1073741824 00:31:00.074 } 00:31:00.074 ]' 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e78f48b0-34ce-4b3b-a544-423cf23321f9") .free_clusters' 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e78f48b0-34ce-4b3b-a544-423cf23321f9") .cluster_size' 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:00.074 952320 00:31:00.074 06:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:00.074 de964a02-8450-429a-8189-32342348a13c 00:31:00.333 06:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:00.333 06:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:00.899 06:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:00.899 06:23:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:01.157 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:01.157 fio-3.35 00:31:01.157 Starting 1 thread 00:31:01.416 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.047 00:31:04.047 test: (groupid=0, jobs=1): err= 0: pid=261292: Fri Jul 26 06:23:14 2024 00:31:04.047 read: IOPS=4495, BW=17.6MiB/s (18.4MB/s)(35.3MiB/2011msec) 00:31:04.047 slat (usec): min=2, max=180, avg= 3.52, stdev= 2.73 00:31:04.047 clat (usec): min=1359, max=172774, avg=15533.96, stdev=13081.40 00:31:04.047 lat (usec): min=1363, max=172835, avg=15537.48, stdev=13081.83 00:31:04.047 clat percentiles (msec): 00:31:04.047 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:31:04.047 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:31:04.047 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:31:04.047 | 99.00th=[ 18], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:31:04.047 | 99.99th=[ 174] 00:31:04.047 bw ( KiB/s): min=12600, max=19920, per=99.82%, avg=17948.00, stdev=3567.86, samples=4 00:31:04.047 iops : min= 3150, max= 4980, avg=4487.00, stdev=891.96, 
samples=4 00:31:04.047 write: IOPS=4492, BW=17.5MiB/s (18.4MB/s)(35.3MiB/2011msec); 0 zone resets 00:31:04.047 slat (usec): min=3, max=151, avg= 3.76, stdev= 2.02 00:31:04.047 clat (usec): min=440, max=170362, avg=12645.76, stdev=12330.68 00:31:04.047 lat (usec): min=444, max=170370, avg=12649.52, stdev=12331.15 00:31:04.047 clat percentiles (msec): 00:31:04.047 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:31:04.047 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:31:04.047 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:31:04.047 | 99.00th=[ 15], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:31:04.047 | 99.99th=[ 171] 00:31:04.047 bw ( KiB/s): min=13224, max=19776, per=99.87%, avg=17946.00, stdev=3154.93, samples=4 00:31:04.047 iops : min= 3306, max= 4944, avg=4486.50, stdev=788.73, samples=4 00:31:04.047 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:04.047 lat (msec) : 2=0.03%, 4=0.08%, 10=2.84%, 20=96.21%, 50=0.12% 00:31:04.047 lat (msec) : 250=0.71% 00:31:04.047 cpu : usr=65.32%, sys=31.79%, ctx=83, majf=0, minf=1534 00:31:04.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:31:04.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.047 issued rwts: total=9040,9034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.047 00:31:04.047 Run status group 0 (all jobs): 00:31:04.047 READ: bw=17.6MiB/s (18.4MB/s), 17.6MiB/s-17.6MiB/s (18.4MB/s-18.4MB/s), io=35.3MiB (37.0MB), run=2011-2011msec 00:31:04.047 WRITE: bw=17.5MiB/s (18.4MB/s), 17.5MiB/s-17.5MiB/s (18.4MB/s-18.4MB/s), io=35.3MiB (37.0MB), run=2011-2011msec 00:31:04.047 ----------------------------------------------------- 00:31:04.047 Suppressions used: 00:31:04.047 count bytes template 00:31:04.047 1 58 /usr/src/fio/parse.c 00:31:04.047 1 8 
libtcmalloc_minimal.so 00:31:04.047 ----------------------------------------------------- 00:31:04.047 00:31:04.047 06:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:04.047 06:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:05.419 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=22980673-5e09-4097-acdc-b98581a92ecd 00:31:05.419 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 22980673-5e09-4097-acdc-b98581a92ecd 00:31:05.419 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=22980673-5e09-4097-acdc-b98581a92ecd 00:31:05.419 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:05.419 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:05.419 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:05.419 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:05.678 { 00:31:05.678 "uuid": "e78f48b0-34ce-4b3b-a544-423cf23321f9", 00:31:05.678 "name": "lvs_0", 00:31:05.678 "base_bdev": "Nvme0n1", 00:31:05.678 "total_data_clusters": 930, 00:31:05.678 "free_clusters": 0, 00:31:05.678 "block_size": 512, 00:31:05.678 "cluster_size": 1073741824 00:31:05.678 }, 00:31:05.678 { 00:31:05.678 "uuid": "22980673-5e09-4097-acdc-b98581a92ecd", 00:31:05.678 "name": "lvs_n_0", 00:31:05.678 "base_bdev": "de964a02-8450-429a-8189-32342348a13c", 
00:31:05.678 "total_data_clusters": 237847, 00:31:05.678 "free_clusters": 237847, 00:31:05.678 "block_size": 512, 00:31:05.678 "cluster_size": 4194304 00:31:05.678 } 00:31:05.678 ]' 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="22980673-5e09-4097-acdc-b98581a92ecd") .free_clusters' 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="22980673-5e09-4097-acdc-b98581a92ecd") .cluster_size' 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:05.678 951388 00:31:05.678 06:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:06.613 2fbc1e4b-6a81-4a84-a5cf-443910c4513f 00:31:06.613 06:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:06.871 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:07.129 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:07.387 06:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:07.645 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:07.645 fio-3.35 00:31:07.645 Starting 1 thread 00:31:07.903 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.430 00:31:10.430 test: (groupid=0, jobs=1): err= 0: pid=262145: Fri Jul 26 06:23:21 2024 00:31:10.430 read: IOPS=4331, BW=16.9MiB/s (17.7MB/s)(34.0MiB/2011msec) 00:31:10.430 slat (usec): min=2, max=201, avg= 3.78, stdev= 3.40 00:31:10.430 clat (usec): min=6316, max=26071, avg=16259.46, stdev=1444.65 00:31:10.430 lat (usec): min=6326, max=26074, avg=16263.24, stdev=1444.46 00:31:10.430 clat percentiles (usec): 00:31:10.430 | 1.00th=[12780], 5.00th=[13960], 10.00th=[14484], 20.00th=[15139], 00:31:10.430 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16188], 60.00th=[16581], 00:31:10.430 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:31:10.430 | 99.00th=[19530], 99.50th=[20317], 99.90th=[22938], 99.95th=[23462], 00:31:10.430 | 99.99th=[26084] 00:31:10.430 bw ( KiB/s): min=16360, max=17904, per=99.72%, avg=17278.00, stdev=671.06, samples=4 00:31:10.430 iops : min= 4090, max= 4476, avg=4319.50, stdev=167.76, samples=4 00:31:10.430 write: IOPS=4329, BW=16.9MiB/s (17.7MB/s)(34.0MiB/2011msec); 0 zone resets 00:31:10.430 slat (usec): min=3, max=155, avg= 3.96, stdev= 2.53 00:31:10.430 clat (usec): min=3204, max=22768, 
avg=13121.58, stdev=1233.98 00:31:10.430 lat (usec): min=3215, max=22771, avg=13125.53, stdev=1233.90 00:31:10.430 clat percentiles (usec): 00:31:10.430 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:31:10.430 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13042], 60.00th=[13435], 00:31:10.430 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:31:10.430 | 99.00th=[16188], 99.50th=[16712], 99.90th=[18744], 99.95th=[20317], 00:31:10.430 | 99.99th=[22676] 00:31:10.430 bw ( KiB/s): min=17240, max=17344, per=99.90%, avg=17302.00, stdev=51.17, samples=4 00:31:10.430 iops : min= 4310, max= 4336, avg=4325.50, stdev=12.79, samples=4 00:31:10.430 lat (msec) : 4=0.02%, 10=0.34%, 20=99.26%, 50=0.38% 00:31:10.430 cpu : usr=60.00%, sys=37.46%, ctx=92, majf=0, minf=1533 00:31:10.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:10.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:10.430 issued rwts: total=8711,8707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:10.430 00:31:10.430 Run status group 0 (all jobs): 00:31:10.430 READ: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=34.0MiB (35.7MB), run=2011-2011msec 00:31:10.430 WRITE: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=34.0MiB (35.7MB), run=2011-2011msec 00:31:10.430 ----------------------------------------------------- 00:31:10.430 Suppressions used: 00:31:10.430 count bytes template 00:31:10.430 1 58 /usr/src/fio/parse.c 00:31:10.430 1 8 libtcmalloc_minimal.so 00:31:10.430 ----------------------------------------------------- 00:31:10.430 00:31:10.430 06:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode3 00:31:10.688 06:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:10.688 06:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:14.879 06:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:15.137 06:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:18.427 06:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:18.427 06:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:20.332 rmmod nvme_tcp 00:31:20.332 rmmod nvme_fabrics 00:31:20.332 rmmod nvme_keyring 00:31:20.332 06:23:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 259089 ']' 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 259089 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 259089 ']' 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 259089 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 259089 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 259089' 00:31:20.332 killing process with pid 259089 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 259089 00:31:20.332 06:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 259089 00:31:21.709 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.988 06:23:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:23.909 00:31:23.909 real 0m41.374s 00:31:23.909 user 2m36.921s 00:31:23.909 sys 0m8.319s 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.909 ************************************ 00:31:23.909 END TEST nvmf_fio_host 00:31:23.909 ************************************ 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.909 ************************************ 00:31:23.909 START TEST nvmf_failover 00:31:23.909 ************************************ 00:31:23.909 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:24.169 * Looking for test storage... 00:31:24.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.169 06:23:35 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.169 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.170 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:24.170 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:24.170 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:24.170 06:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.072 
06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:26.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:26.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:26.072 06:23:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:26.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:26.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.072 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:26.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:26.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:26.073 00:31:26.073 --- 10.0.0.2 ping statistics --- 00:31:26.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.073 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:31:26.073 00:31:26.073 --- 10.0.0.1 ping statistics --- 00:31:26.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.073 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=265648 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 265648 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 265648 ']' 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:26.073 06:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:26.333 [2024-07-26 06:23:37.441480] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:26.333 [2024-07-26 06:23:37.441619] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.333 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.333 [2024-07-26 06:23:37.581638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:26.591 [2024-07-26 06:23:37.840244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.591 [2024-07-26 06:23:37.840322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.591 [2024-07-26 06:23:37.840351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.591 [2024-07-26 06:23:37.840385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.591 [2024-07-26 06:23:37.840403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:26.591 [2024-07-26 06:23:37.840520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.592 [2024-07-26 06:23:37.840561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.592 [2024-07-26 06:23:37.840571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.159 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:27.159 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:27.159 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.159 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:27.159 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:27.159 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.159 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:27.418 [2024-07-26 06:23:38.659854] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.418 06:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:27.985 Malloc0 00:31:27.985 06:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:27.985 06:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:28.243 06:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.500 [2024-07-26 06:23:39.771069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.500 06:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:28.758 [2024-07-26 06:23:40.019946] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:28.758 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:29.017 [2024-07-26 06:23:40.276755] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=266062 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 266062 /var/tmp/bdevperf.sock 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 266062 ']' 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:29.017 06:23:40 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:29.017 06:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:30.392 06:23:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:30.392 06:23:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:30.392 06:23:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:30.650 NVMe0n1 00:31:30.650 06:23:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:30.907 00:31:30.907 06:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=266209 00:31:30.907 06:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:30.907 06:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:31.841 06:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:31:32.100 [2024-07-26 06:23:43.388510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:31:32.100 06:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:35.386 06:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:35.644 00:31:35.644 06:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:35.902 [2024-07-26 06:23:47.159292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 [2024-07-26 06:23:47.159403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 [2024-07-26 06:23:47.159426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 [2024-07-26 06:23:47.159460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 [2024-07-26 06:23:47.159477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 [2024-07-26 06:23:47.159494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 [2024-07-26 06:23:47.159511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 
00:31:35.902 [2024-07-26 06:23:47.159528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 [2024-07-26 06:23:47.159545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:31:35.902 06:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:39.197 06:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.197 [2024-07-26 06:23:50.469603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.197 06:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:40.160 06:23:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:40.418 [2024-07-26 06:23:51.727468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:31:40.418 [2024-07-26 06:23:51.727552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:31:40.418 [2024-07-26 06:23:51.727574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:31:40.418 [2024-07-26 06:23:51.727593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:31:40.418 [2024-07-26 06:23:51.727610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:31:40.418 [2024-07-26 
06:23:51.727638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:31:40.418 [2024-07-26 06:23:51.727656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:31:40.418 06:23:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 266209 00:31:46.990 0 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 266062 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 266062 ']' 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 266062 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 266062 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 266062' 00:31:46.990 killing process with pid 266062 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 266062 00:31:46.990 06:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 266062 00:31:46.990 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:47.257 [2024-07-26 06:23:40.373320] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:47.257 [2024-07-26 06:23:40.373502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266062 ] 00:31:47.257 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.257 [2024-07-26 06:23:40.499555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.257 [2024-07-26 06:23:40.737337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.257 Running I/O for 15 seconds... 00:31:47.257 [2024-07-26 06:23:43.389619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.257 [2024-07-26 06:23:43.389678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.257 [2024-07-26 06:23:43.389741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.257 [2024-07-26 06:23:43.389794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.257 [2024-07-26 06:23:43.389839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.389876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.389918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.389955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.389994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390447] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 
nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.390914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.390966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 
[2024-07-26 06:23:43.391388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.391957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.391997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.392033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.392136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.258 [2024-07-26 06:23:43.392215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 
[2024-07-26 06:23:43.392731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.258 [2024-07-26 06:23:43.392958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.258 [2024-07-26 06:23:43.392993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.259 [2024-07-26 06:23:43.393498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.393958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.393997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 
[2024-07-26 06:23:43.394094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.394958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.394994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 
[2024-07-26 06:23:43.395459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.395923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.395971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.396012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.396046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.396112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.259 [2024-07-26 06:23:43.396148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.259 [2024-07-26 06:23:43.396190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 
[2024-07-26 06:23:43.396824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.396935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.396976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.397941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.397977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 
[2024-07-26 06:23:43.398197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.398939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.398978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.399014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.399076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.399116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.399158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.399194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.399237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.399275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.260 [2024-07-26 06:23:43.399317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.260 [2024-07-26 06:23:43.399367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.399408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:43.399443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.399484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:43.399520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 
[2024-07-26 06:23:43.399561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:43.399596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.399636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:43.399678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.399719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:43.399754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.399796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:43.399832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.399896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.261 [2024-07-26 06:23:43.399930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.261 [2024-07-26 06:23:43.399964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55360 len:8 PRP1 0x0 PRP2 0x0 00:31:47.261 [2024-07-26 06:23:43.399999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 
[2024-07-26 06:23:43.400449] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:31:47.261 [2024-07-26 06:23:43.400498] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:47.261 [2024-07-26 06:23:43.400589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.261 [2024-07-26 06:23:43.400629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.400670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.261 [2024-07-26 06:23:43.400706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.400743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.261 [2024-07-26 06:23:43.400779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.400817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.261 [2024-07-26 06:23:43.400852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:43.400888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:47.261 [2024-07-26 06:23:43.401023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:31:47.261 [2024-07-26 06:23:43.406448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.261 [2024-07-26 06:23:43.534459] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:47.261 [2024-07-26 06:23:47.161072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4048 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.161959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.161999] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.162884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.162934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:47.261 [2024-07-26 06:23:47.162969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.163009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.163075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.163119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.163154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.163196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.261 [2024-07-26 06:23:47.163232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.261 [2024-07-26 06:23:47.163273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.262 [2024-07-26 06:23:47.163311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.262 [2024-07-26 06:23:47.163405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.262 [2024-07-26 06:23:47.163488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.262 [2024-07-26 06:23:47.163561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.262 [2024-07-26 06:23:47.163638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.262 [2024-07-26 06:23:47.163724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.262 [2024-07-26 06:23:47.163799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.163877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.163953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.163994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 
06:23:47.164404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.262 [2024-07-26 06:23:47.164711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.262 [2024-07-26 06:23:47.164751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.164786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.164827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.164862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.164902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.164938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.164979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:47.263 [2024-07-26 06:23:47.165325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.165933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.165973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4520 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 
06:23:47.166740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.166942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.166983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:47.263 [2024-07-26 06:23:47.167697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.167926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.167966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.263 [2024-07-26 06:23:47.168001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.263 [2024-07-26 06:23:47.168075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.168960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.168995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 
06:23:47.169096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:58 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.169934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.169970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:47.264 [2024-07-26 06:23:47.170011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.170068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.170152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.170231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.170309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.264 [2024-07-26 06:23:47.170410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.264 [2024-07-26 06:23:47.170527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4928 len:8 PRP1 0x0 PRP2 0x0 00:31:47.264 [2024-07-26 06:23:47.170565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.264 [2024-07-26 06:23:47.170642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.264 [2024-07-26 06:23:47.170674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4936 len:8 PRP1 0x0 PRP2 0x0 00:31:47.264 [2024-07-26 06:23:47.170708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.264 [2024-07-26 06:23:47.170774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.264 [2024-07-26 06:23:47.170809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4944 len:8 PRP1 0x0 PRP2 0x0 00:31:47.264 [2024-07-26 06:23:47.170844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.170879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.264 [2024-07-26 06:23:47.170908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.264 [2024-07-26 06:23:47.170938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4952 len:8 PRP1 0x0 PRP2 0x0 00:31:47.264 [2024-07-26 06:23:47.170970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.171004] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.264 [2024-07-26 06:23:47.171034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.264 [2024-07-26 06:23:47.171099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:8 PRP1 0x0 PRP2 0x0 00:31:47.264 [2024-07-26 06:23:47.171134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.171170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.264 [2024-07-26 06:23:47.171201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.264 [2024-07-26 06:23:47.171231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4968 len:8 PRP1 0x0 PRP2 0x0 00:31:47.264 [2024-07-26 06:23:47.171267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.264 [2024-07-26 06:23:47.171304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.264 [2024-07-26 06:23:47.171333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.264 [2024-07-26 06:23:47.171388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4976 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.171421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.171456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.171486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.171515] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4984 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.171550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.171584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.171613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.171645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.171678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.171713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.171742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.171775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5000 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.171808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.171855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.171891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.171923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5008 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.171955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.171990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.172019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.172074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5016 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.172112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.172147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.172178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.172209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.172242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.172279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.172309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.172341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5032 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.172398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.172432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.172462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.172491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4272 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.172526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.172561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.265 [2024-07-26 06:23:47.172590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.265 [2024-07-26 06:23:47.172621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4280 len:8 PRP1 0x0 PRP2 0x0 00:31:47.265 [2024-07-26 06:23:47.172653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.173036] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller. 
00:31:47.265 [2024-07-26 06:23:47.173115] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:47.265 [2024-07-26 06:23:47.173193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.265 [2024-07-26 06:23:47.173232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.173272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.265 [2024-07-26 06:23:47.173309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.173355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.265 [2024-07-26 06:23:47.173396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.173435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.265 [2024-07-26 06:23:47.173471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:47.173507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:47.265 [2024-07-26 06:23:47.173622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:31:47.265 [2024-07-26 06:23:47.179186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.265 [2024-07-26 06:23:47.348667] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:47.265 [2024-07-26 06:23:51.729708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.729770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.729832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.729871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.729916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.729954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.729998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:320 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.730945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.730987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.731024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.731088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.265 [2024-07-26 06:23:51.731126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.265 [2024-07-26 06:23:51.731167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 
06:23:51.731527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.731956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.731992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.732090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.732181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.266 [2024-07-26 06:23:51.732258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:47.266 [2024-07-26 06:23:51.732470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.732932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.732969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:664 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.733927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.733964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.734006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.734042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.734109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.734148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.734190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.734226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.734269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.734306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.266 [2024-07-26 06:23:51.734347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.266 [2024-07-26 06:23:51.734398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.734473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.734552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.734627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.734714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.734790] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.734866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.734945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.734987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 
06:23:51.735730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.735960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.735995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.736094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.267 [2024-07-26 06:23:51.736174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.267 [2024-07-26 06:23:51.736290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:944 len:8 PRP1 0x0 PRP2 0x0 00:31:47.267 [2024-07-26 06:23:51.736328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.267 [2024-07-26 06:23:51.736418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.267 [2024-07-26 06:23:51.736450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:952 len:8 PRP1 0x0 PRP2 0x0 00:31:47.267 [2024-07-26 06:23:51.736483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.267 [2024-07-26 06:23:51.736549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.267 [2024-07-26 06:23:51.736580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:8 PRP1 0x0 PRP2 0x0 00:31:47.267 [2024-07-26 06:23:51.736614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.267 [2024-07-26 06:23:51.736679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.267 [2024-07-26 06:23:51.736709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:968 len:8 PRP1 0x0 PRP2 0x0 00:31:47.267 [2024-07-26 06:23:51.736742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.267 [2024-07-26 06:23:51.736811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.267 [2024-07-26 06:23:51.736844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:976 len:8 PRP1 0x0 PRP2 0x0 00:31:47.267 [2024-07-26 06:23:51.736878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.736912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.267 [2024-07-26 06:23:51.736942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.267 [2024-07-26 06:23:51.736972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:984 len:8 PRP1 0x0 PRP2 0x0 00:31:47.267 [2024-07-26 06:23:51.737007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [2024-07-26 06:23:51.737056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.267 [2024-07-26 06:23:51.737095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.267 [2024-07-26 06:23:51.737128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:8 PRP1 0x0 PRP2 0x0 00:31:47.267 [2024-07-26 06:23:51.737161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:47.267 [repetitive log entries condensed: alternating nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) and nvme_qpair.c: 558:nvme_qpair_manual_complete_request cycles for queued WRITE commands sqid:1 cid:0 nsid:1 lba:1000 through lba:1304 (len:8, step 8) and READ commands lba:536 and lba:544, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:31:47.269 [2024-07-26 06:23:51.742784]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.269 [2024-07-26 06:23:51.742815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.269 [2024-07-26 06:23:51.742845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:552 len:8 PRP1 0x0 PRP2 0x0 00:31:47.269 [2024-07-26 06:23:51.742880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.269 [2024-07-26 06:23:51.743300] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller. 00:31:47.269 [2024-07-26 06:23:51.743345] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:47.269 [2024-07-26 06:23:51.743452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.269 [2024-07-26 06:23:51.743492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.269 [2024-07-26 06:23:51.743532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.269 [2024-07-26 06:23:51.743567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.269 [2024-07-26 06:23:51.743611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.269 [2024-07-26 06:23:51.743648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.269 [2024-07-26 06:23:51.743686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.269 [2024-07-26 06:23:51.743721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.269 [2024-07-26 06:23:51.743756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.269 [2024-07-26 06:23:51.743887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:31:47.269 [2024-07-26 06:23:51.749715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.269 [2024-07-26 06:23:51.837266] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:47.269 00:31:47.269 Latency(us) 00:31:47.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.269 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:47.269 Verification LBA range: start 0x0 length 0x4000 00:31:47.269 NVMe0n1 : 15.01 5990.48 23.40 809.23 0.00 18789.62 1061.93 28738.75 00:31:47.270 =================================================================================================================== 00:31:47.270 Total : 5990.48 23.40 809.23 0.00 18789.62 1061.93 28738.75 00:31:47.270 Received shutdown signal, test time was about 15.000000 seconds 00:31:47.270 00:31:47.270 Latency(us) 00:31:47.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.270 =================================================================================================================== 00:31:47.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:47.270 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 
'Resetting controller successful' 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=268171 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 268171 /var/tmp/bdevperf.sock 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 268171 ']' 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:47.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:47.270 06:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.204 06:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:48.204 06:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:48.204 06:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:48.461 [2024-07-26 06:23:59.555138] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:48.462 06:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:48.719 [2024-07-26 06:23:59.803868] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:48.719 06:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.977 NVMe0n1 00:31:48.977 06:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:49.542 00:31:49.542 06:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:31:49.799 00:31:49.799 06:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:49.799 06:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:50.057 06:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:50.314 06:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:53.607 06:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:53.607 06:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:53.607 06:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=268966 00:31:53.607 06:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:53.607 06:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 268966 00:31:54.984 0 00:31:54.984 06:24:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:54.984 [2024-07-26 06:23:58.409159] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:54.984 [2024-07-26 06:23:58.409309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268171 ] 00:31:54.984 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.984 [2024-07-26 06:23:58.530628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.984 [2024-07-26 06:23:58.770439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.984 [2024-07-26 06:24:01.478548] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:54.984 [2024-07-26 06:24:01.478709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.984 [2024-07-26 06:24:01.478763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.984 [2024-07-26 06:24:01.478810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.984 [2024-07-26 06:24:01.478847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.984 [2024-07-26 06:24:01.478886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.984 [2024-07-26 06:24:01.478922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.984 [2024-07-26 06:24:01.478960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.984 [2024-07-26 06:24:01.478994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.984 [2024-07-26 06:24:01.479029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:54.984 [2024-07-26 06:24:01.479168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:54.984 [2024-07-26 06:24:01.479239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:31:54.984 [2024-07-26 06:24:01.612349] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:54.984 Running I/O for 1 seconds... 00:31:54.984 00:31:54.984 Latency(us) 00:31:54.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.984 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:54.984 Verification LBA range: start 0x0 length 0x4000 00:31:54.984 NVMe0n1 : 1.01 6221.71 24.30 0.00 0.00 20481.64 4004.98 18932.62 00:31:54.984 =================================================================================================================== 00:31:54.984 Total : 6221.71 24.30 0.00 0.00 20481.64 4004.98 18932.62 00:31:54.984 06:24:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:54.984 06:24:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:54.984 06:24:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:55.242 06:24:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:55.242 06:24:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:55.499 06:24:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:55.757 06:24:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:59.037 06:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:59.037 06:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 268171 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 268171 ']' 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 268171 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268171 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268171' 00:31:59.037 killing process with pid 268171 00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 268171 
00:31:59.037 06:24:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 268171 00:32:00.011 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:00.011 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:00.269 rmmod nvme_tcp 00:32:00.269 rmmod nvme_fabrics 00:32:00.269 rmmod nvme_keyring 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 265648 ']' 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 265648 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@950 -- # '[' -z 265648 ']' 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 265648 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 265648 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 265648' 00:32:00.269 killing process with pid 265648 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 265648 00:32:00.269 06:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 265648 00:32:01.648 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.648 06:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:04.190 00:32:04.190 real 0m39.819s 00:32:04.190 user 2m19.487s 00:32:04.190 sys 0m6.152s 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:04.190 ************************************ 00:32:04.190 END TEST nvmf_failover 00:32:04.190 ************************************ 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.190 ************************************ 00:32:04.190 START TEST nvmf_host_discovery 00:32:04.190 ************************************ 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:04.190 * Looking for test storage... 
00:32:04.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:04.190 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:04.191 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:06.093 
06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.093 06:24:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:06.093 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:06.093 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:06.093 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:06.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:06.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.094 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:06.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:32:06.094 00:32:06.094 --- 10.0.0.2 ping statistics --- 00:32:06.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.094 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:32:06.094 00:32:06.094 --- 10.0.0.1 ping statistics --- 00:32:06.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.094 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=272321 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 272321 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 272321 ']' 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.094 06:24:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.094 [2024-07-26 06:24:17.231072] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:06.094 [2024-07-26 06:24:17.231225] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.094 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.094 [2024-07-26 06:24:17.367143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.354 [2024-07-26 06:24:17.623728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.354 [2024-07-26 06:24:17.623815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:06.354 [2024-07-26 06:24:17.623845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.354 [2024-07-26 06:24:17.623870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.354 [2024-07-26 06:24:17.623892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.354 [2024-07-26 06:24:17.623949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.919 [2024-07-26 06:24:18.198804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.919 [2024-07-26 06:24:18.207081] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.919 null0 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.919 null1 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.919 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=272474 00:32:06.920 
06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 272474 /tmp/host.sock 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 272474 ']' 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:06.920 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.920 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:07.176 [2024-07-26 06:24:18.322020] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:07.176 [2024-07-26 06:24:18.322181] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272474 ] 00:32:07.176 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.176 [2024-07-26 06:24:18.443834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.434 [2024-07-26 06:24:18.673645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:08.000 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 [2024-07-26 06:24:19.583047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:08.259 06:24:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.259 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:08.517 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.517 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:08.518 06:24:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:08.518 06:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:32:09.085 [2024-07-26 06:24:20.351356] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:09.085 [2024-07-26 06:24:20.351420] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:09.085 [2024-07-26 06:24:20.351468] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:09.343 [2024-07-26 06:24:20.437771] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:09.343 [2024-07-26 06:24:20.542742] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:09.343 [2024-07-26 06:24:20.542782] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.601 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:09.602 
06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.602 06:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.859 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.117 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:10.117 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:10.117 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:10.117 06:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.054 [2024-07-26 06:24:22.252343] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:11.054 [2024-07-26 06:24:22.253570] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:11.054 [2024-07-26 06:24:22.253659] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:11.054 06:24:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.054 [2024-07-26 06:24:22.339924] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.054 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:11.055 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.055 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:11.055 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:11.055 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.313 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:11.313 06:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:11.313 [2024-07-26 06:24:22.446960] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:11.313 [2024-07-26 06:24:22.446999] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:11.313 [2024-07-26 06:24:22.447019] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
found again 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.248 06:24:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.248 [2024-07-26 06:24:23.485648] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:12.248 [2024-07-26 06:24:23.485720] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:12.248 [2024-07-26 06:24:23.489010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.248 [2024-07-26 06:24:23.489082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.248 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:12.248 [2024-07-26 06:24:23.489136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.249 [2024-07-26 06:24:23.489172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.249 [2024-07-26 06:24:23.489215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:32:12.249 [2024-07-26 06:24:23.489250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.249 [2024-07-26 06:24:23.489287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.249 [2024-07-26 06:24:23.489324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.249 [2024-07-26 06:24:23.489359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:12.249 [2024-07-26 06:24:23.498990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.249 [2024-07-26 06:24:23.509038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.249 [2024-07-26 06:24:23.509644] posix.c:1023:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:32:12.249 [2024-07-26 06:24:23.509716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.249 [2024-07-26 06:24:23.509772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 [2024-07-26 06:24:23.509840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 [2024-07-26 06:24:23.509943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.249 [2024-07-26 06:24:23.509997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.249 [2024-07-26 06:24:23.510075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.249 [2024-07-26 06:24:23.510132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.249 [2024-07-26 06:24:23.519181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.249 [2024-07-26 06:24:23.519407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.249 [2024-07-26 06:24:23.519449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.249 [2024-07-26 06:24:23.519489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 [2024-07-26 06:24:23.519541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 [2024-07-26 06:24:23.519590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.249 [2024-07-26 06:24:23.519642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.249 [2024-07-26 06:24:23.519675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.249 [2024-07-26 06:24:23.519734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.249 [2024-07-26 06:24:23.529287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.249 [2024-07-26 06:24:23.529558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.249 [2024-07-26 06:24:23.529602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.249 [2024-07-26 06:24:23.529655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 [2024-07-26 06:24:23.529706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 [2024-07-26 06:24:23.529754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.249 [2024-07-26 06:24:23.529810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.249 [2024-07-26 06:24:23.529858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.249 [2024-07-26 06:24:23.529901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:12.249 [2024-07-26 06:24:23.539549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.249 [2024-07-26 06:24:23.539818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.249 [2024-07-26 06:24:23.539861] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.249 [2024-07-26 06:24:23.539901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 [2024-07-26 06:24:23.539954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 [2024-07-26 06:24:23.540025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.249 [2024-07-26 06:24:23.540071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.249 [2024-07-26 06:24:23.540117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.249 [2024-07-26 06:24:23.540165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.249 [2024-07-26 06:24:23.549651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.249 [2024-07-26 06:24:23.549875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.249 [2024-07-26 06:24:23.549917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.249 [2024-07-26 06:24:23.549955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 [2024-07-26 06:24:23.550008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 [2024-07-26 06:24:23.550087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.249 [2024-07-26 06:24:23.550135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.249 [2024-07-26 06:24:23.550177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.249 [2024-07-26 06:24:23.550227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.249 [2024-07-26 06:24:23.559751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.249 [2024-07-26 06:24:23.560071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.249 [2024-07-26 06:24:23.560113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.249 [2024-07-26 06:24:23.560151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 [2024-07-26 06:24:23.560203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 [2024-07-26 06:24:23.560273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.249 [2024-07-26 06:24:23.560309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.249 [2024-07-26 06:24:23.560344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.249 [2024-07-26 06:24:23.560407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.249 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.249 [2024-07-26 06:24:23.569860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.249 [2024-07-26 06:24:23.570134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.249 [2024-07-26 06:24:23.570176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.249 [2024-07-26 06:24:23.570216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.249 [2024-07-26 06:24:23.570269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.249 [2024-07-26 06:24:23.570341] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.249 [2024-07-26 06:24:23.570377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.249 [2024-07-26 06:24:23.570411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.250 [2024-07-26 06:24:23.570477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:12.250 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:12.250 [2024-07-26 06:24:23.579959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.250 [2024-07-26 06:24:23.580249] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.250 [2024-07-26 06:24:23.580292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.250 [2024-07-26 06:24:23.580331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.250 [2024-07-26 06:24:23.580393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.250 [2024-07-26 06:24:23.580480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.250 [2024-07-26 06:24:23.580530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.250 [2024-07-26 06:24:23.580562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.250 [2024-07-26 06:24:23.580608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.509 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.509 [2024-07-26 06:24:23.590100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.509 [2024-07-26 06:24:23.590358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-07-26 06:24:23.590400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.509 [2024-07-26 06:24:23.590445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.509 [2024-07-26 06:24:23.590497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.509 [2024-07-26 06:24:23.590603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.509 [2024-07-26 06:24:23.590653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.509 [2024-07-26 06:24:23.590686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.509 [2024-07-26 06:24:23.590733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.509 [2024-07-26 06:24:23.600224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.509 [2024-07-26 06:24:23.600501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-07-26 06:24:23.600542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.509 [2024-07-26 06:24:23.600580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.509 [2024-07-26 06:24:23.600630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.509 [2024-07-26 06:24:23.600713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.509 [2024-07-26 06:24:23.600750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.509 [2024-07-26 06:24:23.600796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.509 [2024-07-26 06:24:23.600847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.509 [2024-07-26 06:24:23.610325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.509 [2024-07-26 06:24:23.610687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.509 [2024-07-26 06:24:23.610728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:32:12.509 [2024-07-26 06:24:23.610766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:32:12.509 [2024-07-26 06:24:23.610819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:32:12.509 [2024-07-26 06:24:23.610919] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.509 [2024-07-26 06:24:23.610969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.509 [2024-07-26 06:24:23.611001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.509 [2024-07-26 06:24:23.611094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.509 [2024-07-26 06:24:23.612994] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:12.509 [2024-07-26 06:24:23.613055] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:12.509 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:12.509 06:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:13.444 06:24:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.444 
06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.444 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:13.701 06:24:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.701 06:24:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.632 [2024-07-26 06:24:25.906100] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:14.632 [2024-07-26 06:24:25.906143] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:14.632 [2024-07-26 06:24:25.906187] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:14.893 [2024-07-26 06:24:26.032671] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:15.151 [2024-07-26 06:24:26.304800] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:15.151 [2024-07-26 06:24:26.304866] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:15.151 request: 00:32:15.151 { 00:32:15.151 "name": "nvme", 00:32:15.151 "trtype": "tcp", 00:32:15.151 "traddr": "10.0.0.2", 00:32:15.151 "adrfam": "ipv4", 00:32:15.151 "trsvcid": "8009", 00:32:15.151 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:15.151 "wait_for_attach": true, 00:32:15.151 "method": "bdev_nvme_start_discovery", 00:32:15.151 "req_id": 1 00:32:15.151 } 00:32:15.151 Got JSON-RPC error response 00:32:15.151 response: 00:32:15.151 { 00:32:15.151 "code": -17, 00:32:15.151 "message": "File exists" 00:32:15.151 } 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.151 06:24:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.151 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.152 request: 00:32:15.152 { 00:32:15.152 "name": "nvme_second", 00:32:15.152 "trtype": "tcp", 00:32:15.152 "traddr": "10.0.0.2", 00:32:15.152 "adrfam": "ipv4", 00:32:15.152 "trsvcid": "8009", 00:32:15.152 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:15.152 "wait_for_attach": true, 00:32:15.152 "method": "bdev_nvme_start_discovery", 00:32:15.152 "req_id": 1 00:32:15.152 } 00:32:15.152 Got JSON-RPC error response 00:32:15.152 response: 00:32:15.152 { 00:32:15.152 "code": -17, 00:32:15.152 "message": "File exists" 00:32:15.152 } 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:15.152 
06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:15.152 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:15.410 06:24:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.410 06:24:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.347 [2024-07-26 06:24:27.516791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.347 [2024-07-26 06:24:27.516891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:32:16.347 [2024-07-26 06:24:27.517026] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:16.347 [2024-07-26 06:24:27.517087] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:16.347 [2024-07-26 06:24:27.517126] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:17.285 [2024-07-26 06:24:28.519344] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.285 [2024-07-26 06:24:28.519458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:32:17.285 [2024-07-26 06:24:28.519575] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:17.285 [2024-07-26 06:24:28.519618] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:17.285 [2024-07-26 06:24:28.519659] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:18.218 [2024-07-26 06:24:29.521227] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:18.218 request: 00:32:18.218 { 00:32:18.218 "name": "nvme_second", 00:32:18.218 "trtype": "tcp", 00:32:18.218 "traddr": "10.0.0.2", 00:32:18.218 "adrfam": "ipv4", 00:32:18.218 "trsvcid": "8010", 00:32:18.218 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:18.218 "wait_for_attach": false, 00:32:18.218 "attach_timeout_ms": 3000, 00:32:18.218 "method": "bdev_nvme_start_discovery", 00:32:18.218 "req_id": 1 00:32:18.218 } 00:32:18.218 Got JSON-RPC error response 00:32:18.218 response: 00:32:18.218 { 00:32:18.218 "code": -110, 00:32:18.218 "message": "Connection timed out" 00:32:18.218 } 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:18.218 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 272474 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.476 rmmod nvme_tcp 00:32:18.476 rmmod nvme_fabrics 00:32:18.476 rmmod nvme_keyring 00:32:18.476 06:24:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 272321 ']' 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 272321 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 272321 ']' 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 272321 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:18.476 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 272321 00:32:18.477 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:18.477 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:18.477 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 272321' 00:32:18.477 killing process with pid 272321 00:32:18.477 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 272321 00:32:18.477 06:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 272321 00:32:19.414 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:32:19.673 libgcov profiling 
error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.673 06:24:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:22.219 00:32:22.219 real 0m17.944s 00:32:22.219 user 0m27.937s 00:32:22.219 sys 0m3.244s 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.219 ************************************ 00:32:22.219 END TEST nvmf_host_discovery 00:32:22.219 ************************************ 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.219 ************************************ 00:32:22.219 START TEST nvmf_host_multipath_status 00:32:22.219 ************************************ 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:22.219 * Looking for test storage... 00:32:22.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.219 06:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.219 06:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:22.219 06:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:22.219 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:22.220 06:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:22.220 06:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 
00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:24.125 
06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:24.125 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:24.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:24.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:24.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:24.126 06:24:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:24.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.126 06:24:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:24.126 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:32:24.126 00:32:24.126 --- 10.0.0.2 ping statistics --- 00:32:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.126 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:24.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:32:24.126 00:32:24.126 --- 10.0.0.1 ping statistics --- 00:32:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.126 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:24.126 
06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=276030 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 276030 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 276030 ']' 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:24.126 06:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:24.126 [2024-07-26 06:24:35.278531] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:24.126 [2024-07-26 06:24:35.278697] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.126 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.126 [2024-07-26 06:24:35.421255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:24.385 [2024-07-26 06:24:35.677113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.385 [2024-07-26 06:24:35.677185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.385 [2024-07-26 06:24:35.677228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.385 [2024-07-26 06:24:35.677255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.385 [2024-07-26 06:24:35.677284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:24.385 [2024-07-26 06:24:35.681100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.385 [2024-07-26 06:24:35.681102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=276030 00:32:24.951 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:25.210 [2024-07-26 06:24:36.536434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.468 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:25.726 Malloc0 00:32:25.726 06:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:25.984 06:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:26.241 06:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.498 [2024-07-26 06:24:37.751087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.498 06:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:26.756 [2024-07-26 06:24:37.991699] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:26.756 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=276326 00:32:26.756 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:26.756 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:26.756 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 276326 /var/tmp/bdevperf.sock 00:32:26.756 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 276326 ']' 00:32:26.756 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:26.756 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:26.757 06:24:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:26.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:26.757 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:26.757 06:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.690 06:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.690 06:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:27.690 06:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:27.948 06:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:28.518 Nvme0n1 00:32:28.518 06:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:29.083 Nvme0n1 00:32:29.083 06:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:29.083 06:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
00:32:31.028 06:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:32:31.028 06:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:32:31.287 06:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:31.544 06:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:32:32.478 06:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:32:32.478 06:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:32.478 06:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.478 06:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:32.735 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:32.735 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:32.735 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.735 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:32.993 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:32.993 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:32.993 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:32.993 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:33.251 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:33.251 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:33.251 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:33.251 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:33.509 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:33.509 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:33.509 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:33.509 06:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:33.767 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:33.767 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:33.767 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:33.767 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:34.025 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:34.025 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:32:34.025 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:34.283 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:34.541 06:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:32:35.914 06:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:32:35.914 06:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:35.914 06:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:35.914 06:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:35.914 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:35.914 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:35.914 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:35.914 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:36.172 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:36.172 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:36.172 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:36.172 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:36.430 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:36.430 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:36.430 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:36.430 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:36.688 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:36.688 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:36.688 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:36.688 06:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:36.946 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:36.947 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:36.947 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:36.947 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:37.204 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:37.204 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:32:37.204 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:37.461 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:32:37.719 06:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:32:38.650 06:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:32:38.650 06:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:38.650 06:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:38.650 06:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:38.907 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:38.907 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:38.907 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:38.907 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:39.164 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:39.164 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:39.164 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:39.164 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:39.421 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:39.421 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:39.421 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:39.421 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:39.677 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:39.677 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:39.677 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:39.677 06:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:39.933 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:39.933 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:39.933 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:39.933 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:40.190 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:40.190 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:32:40.190 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:40.448 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:40.706 06:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:32:41.642 06:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:32:41.642 06:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:41.642 06:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:41.642 06:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:41.899 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:41.899 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:41.899 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:41.899 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:42.157 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:42.157 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:42.157 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:42.157 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:42.415 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:42.415 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:42.415 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:42.415 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:42.673 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:42.673 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:42.673 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:42.673 06:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:42.930 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:42.930 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:42.930 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:42.930 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:43.188 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:43.188 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:32:43.188 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:43.446 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:43.704 06:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:32:44.692 06:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:32:44.692 06:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:44.692 06:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:44.692 06:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:44.950 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:44.950 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:44.950 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:44.950 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:45.207 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:45.207 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:45.207 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.207 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:45.465 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:45.465 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:45.465 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.465 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:45.723 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:45.723 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:45.723 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.723 06:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:45.981 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:45.981 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:45.981 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.981 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:46.240 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:46.240 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:32:46.240 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:46.498 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:46.757 06:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:32:47.691 06:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:32:47.691 06:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:47.691 06:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:47.691 06:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:47.949 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:47.949 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:47.949 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:47.949 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:48.208 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:48.208 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:48.208 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.208 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:48.466 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:48.466 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:48.466 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.466 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:48.724 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:48.724 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:48.724 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.724 06:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:48.982 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:48.982 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:48.982 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.982 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:49.239 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:49.239 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:32:49.496 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:32:49.496 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:32:49.754 06:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:50.012 06:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:32:50.946 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:32:50.946 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:50.946 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:50.946 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:51.203 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:51.203 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:51.204 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:51.204 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:51.461 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:51.461 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:51.461 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:51.461 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:51.719 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:51.719 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:51.719 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:51.719 06:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:51.978 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:51.978 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:51.978 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:51.978 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:52.245 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:52.245 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:52.245 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.245 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:52.507 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:52.507 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:32:52.507 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:52.764 06:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:53.022 06:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:32:53.956 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:32:53.956 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:53.957 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:53.957 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:54.215 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:54.215 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:54.215 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:54.215 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:54.473 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:54.473 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:54.473 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:54.473 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:54.731 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:54.731 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:54.731 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:54.731 06:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:54.990 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:54.990 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:54.990 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:54.990 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:55.248 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:55.248 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:55.248 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.248 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:55.506 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:55.506 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:32:55.506 06:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:55.764 06:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:32:56.022 06:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:32:56.956 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:32:56.956 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:56.956 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:56.956 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:57.214 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:57.214 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:57.214 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:57.214 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:57.471 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:57.471 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:57.471 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:57.471 06:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:57.731 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:57.731 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:57.731 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:57.731 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:57.989 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:57.989 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:57.989 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:57.989 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:58.247 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:58.247 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:58.247 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:58.247 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:58.505 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:58.505 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:32:58.505 06:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:58.763 06:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:59.021 06:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
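The trace above repeatedly drives `bdev_nvme_get_io_paths` through `rpc.py` and filters the JSON with `jq` to compare one boolean per path (`current`, `connected`, `accessible`) against an expected value. As an illustration only, the same check can be sketched in Python; the sample JSON shape and field names are inferred from the jq filter seen in the log, not taken from SPDK documentation, and `port_status` merely mirrors the shell helper's name.

```python
import json

# Hypothetical sample of the RPC output; the shape is reconstructed from the
# jq filter '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
# and is only an approximation of what bdev_nvme_get_io_paths really returns.
SAMPLE = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": false,
         "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": true,
         "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(data, port, field, expected):
    """Mirror of the shell helper: locate the io_path whose listener port
    (trsvcid) matches, then compare the requested boolean field against the
    expected value, as the [[ ... == ... ]] tests in the trace do."""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field] == expected
    return False

# Equivalent of "check_status false true true true true true" in the trace:
# 4420/4421 x current/connected/accessible, in that order.
assert port_status(SAMPLE, "4420", "current", False)
assert port_status(SAMPLE, "4421", "current", True)
assert port_status(SAMPLE, "4420", "connected", True)
assert port_status(SAMPLE, "4421", "connected", True)
assert port_status(SAMPLE, "4420", "accessible", True)
assert port_status(SAMPLE, "4421", "accessible", True)
print("all path checks passed")
```

In the log, only the `current` flag appears to move when the ANA states are swapped between the two listeners, while `accessible` drops only when a listener is set to `inaccessible`; that is what the six-argument `check_status` calls assert after each `set_ANA_state` plus `sleep 1`.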
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:00.395 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:00.653 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:00.653 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:00.653 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:00.653 06:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:00.911 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:00.911 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:00.911 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:00.911 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:01.169 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:01.169 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:01.169 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:01.169 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:01.427 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:01.427 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:01.427 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:01.427 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 276326
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 276326 ']'
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 276326
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276326
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276326'
killing process with pid 276326
06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 276326
00:33:01.685 06:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 276326
00:33:02.257 Connection closed with partial response:
00:33:02.257
00:33:02.257
00:33:02.516 06:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 276326
00:33:02.516 06:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:02.516 [2024-07-26 06:24:38.091940] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:33:02.516 [2024-07-26 06:24:38.092150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276326 ]
00:33:02.516 EAL: No free 2048 kB hugepages reported on node 1
00:33:02.516 [2024-07-26 06:24:38.220119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:02.516 [2024-07-26 06:24:38.450748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:33:02.516 Running I/O for 90 seconds...
00:33:02.516 [2024-07-26 06:24:54.639293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.639403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.640779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.640817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.640893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.640922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.640961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.640987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.641955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.641979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.642013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.642051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.642097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.642138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.642176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.642201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.642239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.642264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:02.516 [2024-07-26 06:24:54.642306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.516 [2024-07-26 06:24:54.642331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.642894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.642919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.643940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.643964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.517 [2024-07-26 06:24:54.644027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.517 [2024-07-26 06:24:54.644115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.644185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.644269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.644333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.644414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.644478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.517 [2024-07-26 06:24:54.644557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:02.517 [2024-07-26 06:24:54.644595] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.644620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.644656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.644680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.644718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.644742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.644780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.644804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.644859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.644883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.644923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.644947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.644986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.645014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.645077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.645104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.645161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.517 [2024-07-26 06:24:54.645186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:02.517 [2024-07-26 06:24:54.645225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.645944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.645984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.646965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.646990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.647872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.647999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.518 [2024-07-26 06:24:54.648026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.648094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.518 [2024-07-26 06:24:54.648121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:02.518 [2024-07-26 06:24:54.648165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.518 [2024-07-26 06:24:54.648190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.648947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.648988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.649012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:24:54.649054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:24:54.649102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.272958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.272983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.273046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.519 [2024-07-26 06:25:10.273902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:02.519 [2024-07-26 06:25:10.273936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.519 [2024-07-26 06:25:10.273960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.273996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.274725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.274750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.277455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.277529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.277592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.277670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.277745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.277809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.277873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.277934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.277969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.277993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.278054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.278088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.278126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.520 [2024-07-26 06:25:10.278151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.279652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.279699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.279744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.279770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.279806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.279831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.279867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.279904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.279954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.279980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.280015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.280055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.280102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.280127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.280163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.520 [2024-07-26 06:25:10.280193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:02.520 [2024-07-26 06:25:10.280230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.521 [2024-07-26 06:25:10.280254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:02.521 [2024-07-26 06:25:10.280290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.521 [2024-07-26 06:25:10.280315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:02.521 [2024-07-26 06:25:10.280366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.521 [2024-07-26 06:25:10.280391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:02.521 [2024-07-26 06:25:10.280426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:02.521 [2024-07-26 06:25:10.280450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:02.521 [2024-07-26 06:25:10.280485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:02.521 [2024-07-26 06:25:10.280509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:33:02.521 Received shutdown signal, test time was about 32.397399 seconds
00:33:02.521
00:33:02.521 Latency(us)
00:33:02.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:02.521 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:02.521 Verification LBA range: start 0x0 length 0x4000
00:33:02.521 Nvme0n1 : 32.40 5825.86 22.76 0.00 0.00 21935.09 388.36 4026531.84
00:33:02.521 ===================================================================================================================
00:33:02.521 Total : 5825.86 22.76 0.00 0.00 21935.09 388.36 4026531.84
00:33:02.521 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:33:02.521 06:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:33:03.091 06:25:14
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:03.091 rmmod nvme_tcp 00:33:03.091 rmmod nvme_fabrics 00:33:03.091 rmmod nvme_keyring 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 276030 ']' 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 276030 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 276030 ']' 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 276030 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276030 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:03.091 06:25:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276030' 00:33:03.091 killing process with pid 276030 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 276030 00:33:03.091 06:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 276030 00:33:04.466 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.466 06:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:06.996 00:33:06.996 real 0m44.678s 00:33:06.996 user 2m12.658s 00:33:06.996 sys 0m10.206s 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 
00:33:06.996 ************************************ 00:33:06.996 END TEST nvmf_host_multipath_status 00:33:06.996 ************************************ 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.996 ************************************ 00:33:06.996 START TEST nvmf_discovery_remove_ifc 00:33:06.996 ************************************ 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:06.996 * Looking for test storage... 
00:33:06.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:06.996 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:06.997 06:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.898 06:25:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:08.898 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:08.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:08.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:08.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:08.899 06:25:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:08.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.899 06:25:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:08.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:08.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:33:08.899 00:33:08.899 --- 10.0.0.2 ping statistics --- 00:33:08.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.899 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:08.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:33:08.899 00:33:08.899 --- 10.0.0.1 ping statistics --- 00:33:08.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.899 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=282870 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 282870 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 282870 ']' 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:08.899 06:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:08.899 [2024-07-26 06:25:20.043264] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:08.899 [2024-07-26 06:25:20.043433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.899 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.899 [2024-07-26 06:25:20.184923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.159 [2024-07-26 06:25:20.409348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.159 [2024-07-26 06:25:20.409429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.159 [2024-07-26 06:25:20.409466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.159 [2024-07-26 06:25:20.409487] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.159 [2024-07-26 06:25:20.409505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:09.159 [2024-07-26 06:25:20.409544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.726 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.984 [2024-07-26 06:25:21.062651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.984 [2024-07-26 06:25:21.070878] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:09.984 null0 00:33:09.984 [2024-07-26 06:25:21.102759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=283046 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 283046 /tmp/host.sock 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 283046 ']' 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:09.984 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:09.984 06:25:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.984 [2024-07-26 06:25:21.207830] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:09.984 [2024-07-26 06:25:21.207980] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283046 ] 00:33:09.984 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.242 [2024-07-26 06:25:21.337833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.501 [2024-07-26 06:25:21.589371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.067 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.325 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.325 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:11.325 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.325 06:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.259 [2024-07-26 06:25:23.586283] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:12.259 [2024-07-26 06:25:23.586325] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:12.259 [2024-07-26 06:25:23.586389] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:12.516 [2024-07-26 06:25:23.672687] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:12.516 [2024-07-26 06:25:23.736321] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:12.516 [2024-07-26 06:25:23.736429] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:12.516 [2024-07-26 06:25:23.736526] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:12.516 [2024-07-26 06:25:23.736576] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:12.516 [2024-07-26 06:25:23.736628] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:12.516 [2024-07-26 06:25:23.743463] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 
00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:12.516 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:12.774 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.774 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:12.774 06:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:13.707 06:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:14.639 06:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.013 06:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.013 06:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.013 06:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.950 06:25:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.950 06:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.891 06:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:33:17.891 [2024-07-26 06:25:29.177678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:17.891 [2024-07-26 06:25:29.177793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.891 [2024-07-26 06:25:29.177829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.891 [2024-07-26 06:25:29.177863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.891 [2024-07-26 06:25:29.177887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.891 [2024-07-26 06:25:29.177911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.891 [2024-07-26 06:25:29.177936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.891 [2024-07-26 06:25:29.177959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.891 [2024-07-26 06:25:29.177982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.891 [2024-07-26 06:25:29.178006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.891 [2024-07-26 06:25:29.178028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.891 [2024-07-26 06:25:29.178050] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:33:17.891 [2024-07-26 06:25:29.187686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:17.891 [2024-07-26 06:25:29.197750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:18.829 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.829 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.829 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.829 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.829 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.829 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.829 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.088 [2024-07-26 06:25:30.254128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:19.088 [2024-07-26 06:25:30.254241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:19.089 [2024-07-26 06:25:30.254284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:33:19.089 [2024-07-26 06:25:30.254360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:19.089 [2024-07-26 06:25:30.255160] 
bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.089 [2024-07-26 06:25:30.255246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:19.089 [2024-07-26 06:25:30.255273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:19.089 [2024-07-26 06:25:30.255297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:19.089 [2024-07-26 06:25:30.255348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.089 [2024-07-26 06:25:30.255391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:19.089 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.089 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:19.089 06:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:20.024 [2024-07-26 06:25:31.257932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:20.024 [2024-07-26 06:25:31.258022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:20.024 [2024-07-26 06:25:31.258044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:20.024 [2024-07-26 06:25:31.258087] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:20.024 [2024-07-26 06:25:31.258139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.024 [2024-07-26 06:25:31.258212] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:20.024 [2024-07-26 06:25:31.258296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-07-26 06:25:31.258327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-07-26 06:25:31.258357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-07-26 06:25:31.258394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-07-26 06:25:31.258415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-07-26 06:25:31.258434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-07-26 06:25:31.258455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-07-26 06:25:31.258474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-07-26 06:25:31.258495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-07-26 06:25:31.258514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-07-26 06:25:31.258534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:33:20.024 [2024-07-26 06:25:31.258634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:20.024 [2024-07-26 06:25:31.259618] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:20.024 [2024-07-26 06:25:31.259648] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.024 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.024 06:25:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:20.283 06:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.220 06:25:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:21.220 06:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:22.158 [2024-07-26 06:25:33.312257] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:22.158 [2024-07-26 06:25:33.312299] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:22.158 [2024-07-26 06:25:33.312342] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:22.158 [2024-07-26 06:25:33.398666] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.158 [2024-07-26 06:25:33.462613] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:22.158 [2024-07-26 06:25:33.462689] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:22.158 [2024-07-26 06:25:33.462761] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:22.158 [2024-07-26 06:25:33.462795] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:22.158 [2024-07-26 06:25:33.462818] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.158 [2024-07-26 06:25:33.469816] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and freed. delete nvme_qpair. 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:22.158 06:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.533 
06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 283046 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 283046 ']' 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 283046 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 283046 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 283046' 00:33:23.533 killing process with pid 283046 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 283046 00:33:23.533 06:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 283046 00:33:24.472 libgcov profiling 
error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:33:24.472 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:24.472 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:24.472 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.473 rmmod nvme_tcp 00:33:24.473 rmmod nvme_fabrics 00:33:24.473 rmmod nvme_keyring 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 282870 ']' 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 282870 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 282870 ']' 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 282870 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282870 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282870' 00:33:24.473 killing process with pid 282870 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 282870 00:33:24.473 06:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 282870 00:33:25.854 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:33:25.854 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:25.854 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:25.854 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:25.854 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:25.854 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:25.855 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.855 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.855 06:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.761 06:25:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:27.761 00:33:27.761 real 0m21.107s 00:33:27.761 user 0m31.147s 00:33:27.761 sys 0m3.258s 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.761 ************************************ 00:33:27.761 END TEST nvmf_discovery_remove_ifc 00:33:27.761 ************************************ 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.761 ************************************ 00:33:27.761 START TEST nvmf_identify_kernel_target 00:33:27.761 ************************************ 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:27.761 * Looking for test storage... 
00:33:27.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.761 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:27.762 06:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:29.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.693 06:25:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:29.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.693 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:29.694 06:25:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:29.694 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:29.694 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:29.694 
06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.694 
06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:29.694 06:25:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.694 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.694 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:29.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:29.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:33:29.953 00:33:29.953 --- 10.0.0.2 ping statistics --- 00:33:29.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.953 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:29.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:33:29.953 00:33:29.953 --- 10.0.0.1 ping statistics --- 00:33:29.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.953 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.953 06:25:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:29.953 06:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:30.889 Waiting for block devices as requested 00:33:30.889 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:31.148 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:31.148 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:31.406 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:31.406 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:31.406 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:31.407 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:31.665 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:31.665 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:31.665 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:31.665 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:31.923 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:31.923 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:31.923 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:31.923 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:31.923 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:32.186 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:32.186 No valid GPT data, bailing 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:32.186 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:32.448 00:33:32.448 Discovery Log Number of Records 2, Generation counter 2 00:33:32.448 =====Discovery Log Entry 0====== 00:33:32.448 trtype: tcp 00:33:32.448 adrfam: ipv4 00:33:32.448 subtype: current discovery subsystem 00:33:32.448 treq: not specified, sq flow control disable supported 00:33:32.448 portid: 1 00:33:32.448 trsvcid: 4420 00:33:32.448 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:32.448 traddr: 10.0.0.1 00:33:32.448 eflags: none 00:33:32.448 sectype: none 00:33:32.448 =====Discovery Log Entry 1====== 00:33:32.448 trtype: tcp 00:33:32.448 adrfam: ipv4 00:33:32.448 subtype: nvme subsystem 00:33:32.448 treq: not specified, sq flow control disable supported 00:33:32.448 portid: 1 
00:33:32.448 trsvcid: 4420 00:33:32.448 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:32.448 traddr: 10.0.0.1 00:33:32.448 eflags: none 00:33:32.448 sectype: none 00:33:32.448 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:32.448 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:32.448 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.448 ===================================================== 00:33:32.448 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:32.448 ===================================================== 00:33:32.448 Controller Capabilities/Features 00:33:32.448 ================================ 00:33:32.448 Vendor ID: 0000 00:33:32.448 Subsystem Vendor ID: 0000 00:33:32.448 Serial Number: 1dd8feef17d32b12e61f 00:33:32.448 Model Number: Linux 00:33:32.448 Firmware Version: 6.7.0-68 00:33:32.448 Recommended Arb Burst: 0 00:33:32.448 IEEE OUI Identifier: 00 00 00 00:33:32.448 Multi-path I/O 00:33:32.448 May have multiple subsystem ports: No 00:33:32.448 May have multiple controllers: No 00:33:32.448 Associated with SR-IOV VF: No 00:33:32.448 Max Data Transfer Size: Unlimited 00:33:32.448 Max Number of Namespaces: 0 00:33:32.448 Max Number of I/O Queues: 1024 00:33:32.448 NVMe Specification Version (VS): 1.3 00:33:32.448 NVMe Specification Version (Identify): 1.3 00:33:32.448 Maximum Queue Entries: 1024 00:33:32.448 Contiguous Queues Required: No 00:33:32.448 Arbitration Mechanisms Supported 00:33:32.448 Weighted Round Robin: Not Supported 00:33:32.448 Vendor Specific: Not Supported 00:33:32.448 Reset Timeout: 7500 ms 00:33:32.448 Doorbell Stride: 4 bytes 00:33:32.448 NVM Subsystem Reset: Not Supported 00:33:32.448 Command Sets Supported 00:33:32.448 NVM Command Set: Supported 00:33:32.448 Boot Partition: Not Supported 
00:33:32.448 Memory Page Size Minimum: 4096 bytes 00:33:32.448 Memory Page Size Maximum: 4096 bytes 00:33:32.448 Persistent Memory Region: Not Supported 00:33:32.448 Optional Asynchronous Events Supported 00:33:32.448 Namespace Attribute Notices: Not Supported 00:33:32.448 Firmware Activation Notices: Not Supported 00:33:32.448 ANA Change Notices: Not Supported 00:33:32.448 PLE Aggregate Log Change Notices: Not Supported 00:33:32.448 LBA Status Info Alert Notices: Not Supported 00:33:32.448 EGE Aggregate Log Change Notices: Not Supported 00:33:32.448 Normal NVM Subsystem Shutdown event: Not Supported 00:33:32.448 Zone Descriptor Change Notices: Not Supported 00:33:32.448 Discovery Log Change Notices: Supported 00:33:32.448 Controller Attributes 00:33:32.448 128-bit Host Identifier: Not Supported 00:33:32.448 Non-Operational Permissive Mode: Not Supported 00:33:32.448 NVM Sets: Not Supported 00:33:32.448 Read Recovery Levels: Not Supported 00:33:32.448 Endurance Groups: Not Supported 00:33:32.448 Predictable Latency Mode: Not Supported 00:33:32.448 Traffic Based Keep ALive: Not Supported 00:33:32.448 Namespace Granularity: Not Supported 00:33:32.448 SQ Associations: Not Supported 00:33:32.448 UUID List: Not Supported 00:33:32.448 Multi-Domain Subsystem: Not Supported 00:33:32.448 Fixed Capacity Management: Not Supported 00:33:32.448 Variable Capacity Management: Not Supported 00:33:32.448 Delete Endurance Group: Not Supported 00:33:32.448 Delete NVM Set: Not Supported 00:33:32.448 Extended LBA Formats Supported: Not Supported 00:33:32.448 Flexible Data Placement Supported: Not Supported 00:33:32.448 00:33:32.448 Controller Memory Buffer Support 00:33:32.448 ================================ 00:33:32.448 Supported: No 00:33:32.448 00:33:32.448 Persistent Memory Region Support 00:33:32.448 ================================ 00:33:32.448 Supported: No 00:33:32.448 00:33:32.448 Admin Command Set Attributes 00:33:32.448 ============================ 00:33:32.448 Security 
Send/Receive: Not Supported 00:33:32.448 Format NVM: Not Supported 00:33:32.448 Firmware Activate/Download: Not Supported 00:33:32.448 Namespace Management: Not Supported 00:33:32.448 Device Self-Test: Not Supported 00:33:32.448 Directives: Not Supported 00:33:32.448 NVMe-MI: Not Supported 00:33:32.448 Virtualization Management: Not Supported 00:33:32.448 Doorbell Buffer Config: Not Supported 00:33:32.448 Get LBA Status Capability: Not Supported 00:33:32.448 Command & Feature Lockdown Capability: Not Supported 00:33:32.448 Abort Command Limit: 1 00:33:32.448 Async Event Request Limit: 1 00:33:32.448 Number of Firmware Slots: N/A 00:33:32.448 Firmware Slot 1 Read-Only: N/A 00:33:32.448 Firmware Activation Without Reset: N/A 00:33:32.448 Multiple Update Detection Support: N/A 00:33:32.448 Firmware Update Granularity: No Information Provided 00:33:32.448 Per-Namespace SMART Log: No 00:33:32.448 Asymmetric Namespace Access Log Page: Not Supported 00:33:32.448 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:32.448 Command Effects Log Page: Not Supported 00:33:32.448 Get Log Page Extended Data: Supported 00:33:32.448 Telemetry Log Pages: Not Supported 00:33:32.448 Persistent Event Log Pages: Not Supported 00:33:32.448 Supported Log Pages Log Page: May Support 00:33:32.448 Commands Supported & Effects Log Page: Not Supported 00:33:32.448 Feature Identifiers & Effects Log Page:May Support 00:33:32.448 NVMe-MI Commands & Effects Log Page: May Support 00:33:32.448 Data Area 4 for Telemetry Log: Not Supported 00:33:32.448 Error Log Page Entries Supported: 1 00:33:32.448 Keep Alive: Not Supported 00:33:32.448 00:33:32.448 NVM Command Set Attributes 00:33:32.448 ========================== 00:33:32.448 Submission Queue Entry Size 00:33:32.448 Max: 1 00:33:32.448 Min: 1 00:33:32.448 Completion Queue Entry Size 00:33:32.449 Max: 1 00:33:32.449 Min: 1 00:33:32.449 Number of Namespaces: 0 00:33:32.449 Compare Command: Not Supported 00:33:32.449 Write Uncorrectable Command: 
Not Supported 00:33:32.449 Dataset Management Command: Not Supported 00:33:32.449 Write Zeroes Command: Not Supported 00:33:32.449 Set Features Save Field: Not Supported 00:33:32.449 Reservations: Not Supported 00:33:32.449 Timestamp: Not Supported 00:33:32.449 Copy: Not Supported 00:33:32.449 Volatile Write Cache: Not Present 00:33:32.449 Atomic Write Unit (Normal): 1 00:33:32.449 Atomic Write Unit (PFail): 1 00:33:32.449 Atomic Compare & Write Unit: 1 00:33:32.449 Fused Compare & Write: Not Supported 00:33:32.449 Scatter-Gather List 00:33:32.449 SGL Command Set: Supported 00:33:32.449 SGL Keyed: Not Supported 00:33:32.449 SGL Bit Bucket Descriptor: Not Supported 00:33:32.449 SGL Metadata Pointer: Not Supported 00:33:32.449 Oversized SGL: Not Supported 00:33:32.449 SGL Metadata Address: Not Supported 00:33:32.449 SGL Offset: Supported 00:33:32.449 Transport SGL Data Block: Not Supported 00:33:32.449 Replay Protected Memory Block: Not Supported 00:33:32.449 00:33:32.449 Firmware Slot Information 00:33:32.449 ========================= 00:33:32.449 Active slot: 0 00:33:32.449 00:33:32.449 00:33:32.449 Error Log 00:33:32.449 ========= 00:33:32.449 00:33:32.449 Active Namespaces 00:33:32.449 ================= 00:33:32.449 Discovery Log Page 00:33:32.449 ================== 00:33:32.449 Generation Counter: 2 00:33:32.449 Number of Records: 2 00:33:32.449 Record Format: 0 00:33:32.449 00:33:32.449 Discovery Log Entry 0 00:33:32.449 ---------------------- 00:33:32.449 Transport Type: 3 (TCP) 00:33:32.449 Address Family: 1 (IPv4) 00:33:32.449 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:32.449 Entry Flags: 00:33:32.449 Duplicate Returned Information: 0 00:33:32.449 Explicit Persistent Connection Support for Discovery: 0 00:33:32.449 Transport Requirements: 00:33:32.449 Secure Channel: Not Specified 00:33:32.449 Port ID: 1 (0x0001) 00:33:32.449 Controller ID: 65535 (0xffff) 00:33:32.449 Admin Max SQ Size: 32 00:33:32.449 Transport Service Identifier: 4420 
00:33:32.449 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:32.449 Transport Address: 10.0.0.1 00:33:32.449 Discovery Log Entry 1 00:33:32.449 ---------------------- 00:33:32.449 Transport Type: 3 (TCP) 00:33:32.449 Address Family: 1 (IPv4) 00:33:32.449 Subsystem Type: 2 (NVM Subsystem) 00:33:32.449 Entry Flags: 00:33:32.449 Duplicate Returned Information: 0 00:33:32.449 Explicit Persistent Connection Support for Discovery: 0 00:33:32.449 Transport Requirements: 00:33:32.449 Secure Channel: Not Specified 00:33:32.449 Port ID: 1 (0x0001) 00:33:32.449 Controller ID: 65535 (0xffff) 00:33:32.449 Admin Max SQ Size: 32 00:33:32.449 Transport Service Identifier: 4420 00:33:32.449 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:32.449 Transport Address: 10.0.0.1 00:33:32.449 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:32.709 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.709 get_feature(0x01) failed 00:33:32.709 get_feature(0x02) failed 00:33:32.709 get_feature(0x04) failed 00:33:32.709 ===================================================== 00:33:32.709 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:32.709 ===================================================== 00:33:32.710 Controller Capabilities/Features 00:33:32.710 ================================ 00:33:32.710 Vendor ID: 0000 00:33:32.710 Subsystem Vendor ID: 0000 00:33:32.710 Serial Number: 4291cf4a214a121f4b99 00:33:32.710 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:32.710 Firmware Version: 6.7.0-68 00:33:32.710 Recommended Arb Burst: 6 00:33:32.710 IEEE OUI Identifier: 00 00 00 00:33:32.710 Multi-path I/O 00:33:32.710 May have multiple subsystem ports: Yes 00:33:32.710 May have multiple 
controllers: Yes 00:33:32.710 Associated with SR-IOV VF: No 00:33:32.710 Max Data Transfer Size: Unlimited 00:33:32.710 Max Number of Namespaces: 1024 00:33:32.710 Max Number of I/O Queues: 128 00:33:32.710 NVMe Specification Version (VS): 1.3 00:33:32.710 NVMe Specification Version (Identify): 1.3 00:33:32.710 Maximum Queue Entries: 1024 00:33:32.710 Contiguous Queues Required: No 00:33:32.710 Arbitration Mechanisms Supported 00:33:32.710 Weighted Round Robin: Not Supported 00:33:32.710 Vendor Specific: Not Supported 00:33:32.710 Reset Timeout: 7500 ms 00:33:32.710 Doorbell Stride: 4 bytes 00:33:32.710 NVM Subsystem Reset: Not Supported 00:33:32.710 Command Sets Supported 00:33:32.710 NVM Command Set: Supported 00:33:32.710 Boot Partition: Not Supported 00:33:32.710 Memory Page Size Minimum: 4096 bytes 00:33:32.710 Memory Page Size Maximum: 4096 bytes 00:33:32.710 Persistent Memory Region: Not Supported 00:33:32.710 Optional Asynchronous Events Supported 00:33:32.710 Namespace Attribute Notices: Supported 00:33:32.710 Firmware Activation Notices: Not Supported 00:33:32.710 ANA Change Notices: Supported 00:33:32.710 PLE Aggregate Log Change Notices: Not Supported 00:33:32.710 LBA Status Info Alert Notices: Not Supported 00:33:32.710 EGE Aggregate Log Change Notices: Not Supported 00:33:32.710 Normal NVM Subsystem Shutdown event: Not Supported 00:33:32.710 Zone Descriptor Change Notices: Not Supported 00:33:32.710 Discovery Log Change Notices: Not Supported 00:33:32.710 Controller Attributes 00:33:32.710 128-bit Host Identifier: Supported 00:33:32.710 Non-Operational Permissive Mode: Not Supported 00:33:32.710 NVM Sets: Not Supported 00:33:32.710 Read Recovery Levels: Not Supported 00:33:32.710 Endurance Groups: Not Supported 00:33:32.710 Predictable Latency Mode: Not Supported 00:33:32.710 Traffic Based Keep ALive: Supported 00:33:32.710 Namespace Granularity: Not Supported 00:33:32.710 SQ Associations: Not Supported 00:33:32.710 UUID List: Not Supported 
00:33:32.710 Multi-Domain Subsystem: Not Supported 00:33:32.710 Fixed Capacity Management: Not Supported 00:33:32.710 Variable Capacity Management: Not Supported 00:33:32.710 Delete Endurance Group: Not Supported 00:33:32.710 Delete NVM Set: Not Supported 00:33:32.710 Extended LBA Formats Supported: Not Supported 00:33:32.710 Flexible Data Placement Supported: Not Supported 00:33:32.710 00:33:32.710 Controller Memory Buffer Support 00:33:32.710 ================================ 00:33:32.710 Supported: No 00:33:32.710 00:33:32.710 Persistent Memory Region Support 00:33:32.710 ================================ 00:33:32.710 Supported: No 00:33:32.710 00:33:32.710 Admin Command Set Attributes 00:33:32.710 ============================ 00:33:32.710 Security Send/Receive: Not Supported 00:33:32.710 Format NVM: Not Supported 00:33:32.710 Firmware Activate/Download: Not Supported 00:33:32.710 Namespace Management: Not Supported 00:33:32.710 Device Self-Test: Not Supported 00:33:32.710 Directives: Not Supported 00:33:32.710 NVMe-MI: Not Supported 00:33:32.710 Virtualization Management: Not Supported 00:33:32.710 Doorbell Buffer Config: Not Supported 00:33:32.710 Get LBA Status Capability: Not Supported 00:33:32.710 Command & Feature Lockdown Capability: Not Supported 00:33:32.710 Abort Command Limit: 4 00:33:32.710 Async Event Request Limit: 4 00:33:32.710 Number of Firmware Slots: N/A 00:33:32.710 Firmware Slot 1 Read-Only: N/A 00:33:32.710 Firmware Activation Without Reset: N/A 00:33:32.710 Multiple Update Detection Support: N/A 00:33:32.710 Firmware Update Granularity: No Information Provided 00:33:32.710 Per-Namespace SMART Log: Yes 00:33:32.710 Asymmetric Namespace Access Log Page: Supported 00:33:32.710 ANA Transition Time : 10 sec 00:33:32.710 00:33:32.710 Asymmetric Namespace Access Capabilities 00:33:32.710 ANA Optimized State : Supported 00:33:32.710 ANA Non-Optimized State : Supported 00:33:32.710 ANA Inaccessible State : Supported 00:33:32.710 ANA Persistent Loss 
State : Supported 00:33:32.710 ANA Change State : Supported 00:33:32.710 ANAGRPID is not changed : No 00:33:32.710 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:32.710 00:33:32.710 ANA Group Identifier Maximum : 128 00:33:32.710 Number of ANA Group Identifiers : 128 00:33:32.710 Max Number of Allowed Namespaces : 1024 00:33:32.710 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:32.710 Command Effects Log Page: Supported 00:33:32.710 Get Log Page Extended Data: Supported 00:33:32.710 Telemetry Log Pages: Not Supported 00:33:32.710 Persistent Event Log Pages: Not Supported 00:33:32.710 Supported Log Pages Log Page: May Support 00:33:32.710 Commands Supported & Effects Log Page: Not Supported 00:33:32.710 Feature Identifiers & Effects Log Page:May Support 00:33:32.710 NVMe-MI Commands & Effects Log Page: May Support 00:33:32.710 Data Area 4 for Telemetry Log: Not Supported 00:33:32.710 Error Log Page Entries Supported: 128 00:33:32.710 Keep Alive: Supported 00:33:32.710 Keep Alive Granularity: 1000 ms 00:33:32.710 00:33:32.710 NVM Command Set Attributes 00:33:32.710 ========================== 00:33:32.710 Submission Queue Entry Size 00:33:32.710 Max: 64 00:33:32.710 Min: 64 00:33:32.710 Completion Queue Entry Size 00:33:32.710 Max: 16 00:33:32.710 Min: 16 00:33:32.710 Number of Namespaces: 1024 00:33:32.710 Compare Command: Not Supported 00:33:32.710 Write Uncorrectable Command: Not Supported 00:33:32.710 Dataset Management Command: Supported 00:33:32.710 Write Zeroes Command: Supported 00:33:32.710 Set Features Save Field: Not Supported 00:33:32.710 Reservations: Not Supported 00:33:32.710 Timestamp: Not Supported 00:33:32.710 Copy: Not Supported 00:33:32.710 Volatile Write Cache: Present 00:33:32.710 Atomic Write Unit (Normal): 1 00:33:32.710 Atomic Write Unit (PFail): 1 00:33:32.710 Atomic Compare & Write Unit: 1 00:33:32.710 Fused Compare & Write: Not Supported 00:33:32.710 Scatter-Gather List 00:33:32.711 SGL Command Set: Supported 00:33:32.711 SGL 
Keyed: Not Supported 00:33:32.711 SGL Bit Bucket Descriptor: Not Supported 00:33:32.711 SGL Metadata Pointer: Not Supported 00:33:32.711 Oversized SGL: Not Supported 00:33:32.711 SGL Metadata Address: Not Supported 00:33:32.711 SGL Offset: Supported 00:33:32.711 Transport SGL Data Block: Not Supported 00:33:32.711 Replay Protected Memory Block: Not Supported 00:33:32.711 00:33:32.711 Firmware Slot Information 00:33:32.711 ========================= 00:33:32.711 Active slot: 0 00:33:32.711 00:33:32.711 Asymmetric Namespace Access 00:33:32.711 =========================== 00:33:32.711 Change Count : 0 00:33:32.711 Number of ANA Group Descriptors : 1 00:33:32.711 ANA Group Descriptor : 0 00:33:32.711 ANA Group ID : 1 00:33:32.711 Number of NSID Values : 1 00:33:32.711 Change Count : 0 00:33:32.711 ANA State : 1 00:33:32.711 Namespace Identifier : 1 00:33:32.711 00:33:32.711 Commands Supported and Effects 00:33:32.711 ============================== 00:33:32.711 Admin Commands 00:33:32.711 -------------- 00:33:32.711 Get Log Page (02h): Supported 00:33:32.711 Identify (06h): Supported 00:33:32.711 Abort (08h): Supported 00:33:32.711 Set Features (09h): Supported 00:33:32.711 Get Features (0Ah): Supported 00:33:32.711 Asynchronous Event Request (0Ch): Supported 00:33:32.711 Keep Alive (18h): Supported 00:33:32.711 I/O Commands 00:33:32.711 ------------ 00:33:32.711 Flush (00h): Supported 00:33:32.711 Write (01h): Supported LBA-Change 00:33:32.711 Read (02h): Supported 00:33:32.711 Write Zeroes (08h): Supported LBA-Change 00:33:32.711 Dataset Management (09h): Supported 00:33:32.711 00:33:32.711 Error Log 00:33:32.711 ========= 00:33:32.711 Entry: 0 00:33:32.711 Error Count: 0x3 00:33:32.711 Submission Queue Id: 0x0 00:33:32.711 Command Id: 0x5 00:33:32.711 Phase Bit: 0 00:33:32.711 Status Code: 0x2 00:33:32.711 Status Code Type: 0x0 00:33:32.711 Do Not Retry: 1 00:33:32.711 Error Location: 0x28 00:33:32.711 LBA: 0x0 00:33:32.711 Namespace: 0x0 00:33:32.711 Vendor Log Page: 
0x0 00:33:32.711 ----------- 00:33:32.711 Entry: 1 00:33:32.711 Error Count: 0x2 00:33:32.711 Submission Queue Id: 0x0 00:33:32.711 Command Id: 0x5 00:33:32.711 Phase Bit: 0 00:33:32.711 Status Code: 0x2 00:33:32.711 Status Code Type: 0x0 00:33:32.711 Do Not Retry: 1 00:33:32.711 Error Location: 0x28 00:33:32.711 LBA: 0x0 00:33:32.711 Namespace: 0x0 00:33:32.711 Vendor Log Page: 0x0 00:33:32.711 ----------- 00:33:32.711 Entry: 2 00:33:32.711 Error Count: 0x1 00:33:32.711 Submission Queue Id: 0x0 00:33:32.711 Command Id: 0x4 00:33:32.711 Phase Bit: 0 00:33:32.711 Status Code: 0x2 00:33:32.711 Status Code Type: 0x0 00:33:32.711 Do Not Retry: 1 00:33:32.711 Error Location: 0x28 00:33:32.711 LBA: 0x0 00:33:32.711 Namespace: 0x0 00:33:32.711 Vendor Log Page: 0x0 00:33:32.711 00:33:32.711 Number of Queues 00:33:32.711 ================ 00:33:32.711 Number of I/O Submission Queues: 128 00:33:32.711 Number of I/O Completion Queues: 128 00:33:32.711 00:33:32.711 ZNS Specific Controller Data 00:33:32.711 ============================ 00:33:32.711 Zone Append Size Limit: 0 00:33:32.711 00:33:32.711 00:33:32.711 Active Namespaces 00:33:32.711 ================= 00:33:32.711 get_feature(0x05) failed 00:33:32.711 Namespace ID:1 00:33:32.711 Command Set Identifier: NVM (00h) 00:33:32.711 Deallocate: Supported 00:33:32.711 Deallocated/Unwritten Error: Not Supported 00:33:32.711 Deallocated Read Value: Unknown 00:33:32.711 Deallocate in Write Zeroes: Not Supported 00:33:32.711 Deallocated Guard Field: 0xFFFF 00:33:32.711 Flush: Supported 00:33:32.711 Reservation: Not Supported 00:33:32.711 Namespace Sharing Capabilities: Multiple Controllers 00:33:32.711 Size (in LBAs): 1953525168 (931GiB) 00:33:32.711 Capacity (in LBAs): 1953525168 (931GiB) 00:33:32.711 Utilization (in LBAs): 1953525168 (931GiB) 00:33:32.711 UUID: 4bad149d-3d9c-4c99-99bc-1a884957362e 00:33:32.711 Thin Provisioning: Not Supported 00:33:32.711 Per-NS Atomic Units: Yes 00:33:32.711 Atomic Boundary Size (Normal): 0 
00:33:32.711 Atomic Boundary Size (PFail): 0 00:33:32.711 Atomic Boundary Offset: 0 00:33:32.711 NGUID/EUI64 Never Reused: No 00:33:32.711 ANA group ID: 1 00:33:32.711 Namespace Write Protected: No 00:33:32.711 Number of LBA Formats: 1 00:33:32.711 Current LBA Format: LBA Format #00 00:33:32.711 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:32.711 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:32.711 rmmod nvme_tcp 00:33:32.711 rmmod nvme_fabrics 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:32.711 
06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.711 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.712 06:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:35.251 06:25:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:35.251 06:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:36.185 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:36.185 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:36.185 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:36.185 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:36.185 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:36.185 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:36.185 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:36.185 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:36.185 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:37.120 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:37.120 00:33:37.120 real 0m9.427s 00:33:37.120 user 0m1.977s 00:33:37.120 sys 0m3.449s 00:33:37.120 06:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:37.120 06:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.120 ************************************ 00:33:37.120 END TEST nvmf_identify_kernel_target 00:33:37.120 ************************************ 00:33:37.120 06:25:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:37.120 06:25:48 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:37.120 06:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:37.120 06:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.120 ************************************ 00:33:37.120 START TEST nvmf_auth_host 00:33:37.120 ************************************ 00:33:37.120 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:37.120 * Looking for test storage... 00:33:37.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.380 06:25:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:37.380 06:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:39.283 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:39.283 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.283 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:39.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:39.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:39.284 06:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:39.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:33:39.284 00:33:39.284 --- 10.0.0.2 ping statistics --- 00:33:39.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.284 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:39.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:33:39.284 00:33:39.284 --- 10.0.0.1 ping statistics --- 00:33:39.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.284 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=290321 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 290321 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 290321 ']' 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
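Here `nvmfappstart` launches `nvmf_tgt` inside the namespace and `waitforlisten` polls (up to `max_retries=100`) until the process accepts connections on the UNIX RPC socket `/var/tmp/spdk.sock`. A minimal Python analogue of that wait loop (a sketch of the polling pattern, not SPDK's actual bash function) could be:

```python
import socket
import time

def waitforlisten(sock_path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until a UNIX-domain socket accepts connections.

    Mirrors the retry loop in the log: attempt a connect, sleep on failure,
    give up after max_retries attempts.
    """
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # target is up and listening
        except OSError:
            time.sleep(delay)    # not ready yet; retry
        finally:
            s.close()
    return False
```

The connect-then-close probe is cheap and leaves the RPC server unaffected, which is why this pattern is common for "wait until daemon is ready" checks.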
00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:39.284 06:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=918a026bb345556879f388241766e1fc 00:33:40.223 06:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Pl8 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 918a026bb345556879f388241766e1fc 0 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 918a026bb345556879f388241766e1fc 0 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=918a026bb345556879f388241766e1fc 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:40.223 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Pl8 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Pl8 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Pl8 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:40.483 06:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e60fc5433db8f57aa528af1c9aea5faf6e4e697e2977f66af25510899c02321c 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tBU 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e60fc5433db8f57aa528af1c9aea5faf6e4e697e2977f66af25510899c02321c 3 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e60fc5433db8f57aa528af1c9aea5faf6e4e697e2977f66af25510899c02321c 3 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e60fc5433db8f57aa528af1c9aea5faf6e4e697e2977f66af25510899c02321c 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tBU 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tBU 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tBU 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c346f2c10e192cc9884a5826b697f9306522589f0d879231 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FFU 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c346f2c10e192cc9884a5826b697f9306522589f0d879231 0 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c346f2c10e192cc9884a5826b697f9306522589f0d879231 0 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c346f2c10e192cc9884a5826b697f9306522589f0d879231 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FFU 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FFU 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FFU 
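Each `gen_dhchap_key <digest> <len>` call above draws `len/2` random bytes with `xxd -p -c0 -l N /dev/urandom`, then `format_dhchap_key`/`format_key` wraps them into an in-band DH-HMAC-CHAP secret via an inline `python -` snippet. A sketch of that wrapping, assuming the standard NVMe DH-HMAC-CHAP secret representation used by nvme-cli and SPDK (base64 of the key bytes plus a little-endian CRC-32, framed as `DHHC-1:<hmac id>:<base64>:`):

```python
import base64
import os
import zlib

def gen_dhchap_key(hmac_id: int, key_len: int) -> str:
    """Generate a DH-HMAC-CHAP secret string.

    hmac_id: 0 = null, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512
             (the 'digests' mapping in the log).
    key_len: raw key length in bytes; the log uses 16, 24 or 32
             (i.e. 32/48/64 hex characters).
    """
    key = os.urandom(key_len)                    # xxd -p -c0 -l N /dev/urandom
    crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended per the secret format
    b64 = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{hmac_id:02x}:{b64}:"

secret = gen_dhchap_key(3, 32)  # sha512 digest id, 32-byte key, as in the log
```

The CRC lets the receiving side detect a corrupted or truncated secret before attempting authentication; the secret is then written to a 0600 temp file (`/tmp/spdk.key-*`) and later registered with `keyring_file_add_key`.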
00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f40f6ec3b2a217ef7565f94a5e856c26d69311bccca327dc 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4DS 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f40f6ec3b2a217ef7565f94a5e856c26d69311bccca327dc 2 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f40f6ec3b2a217ef7565f94a5e856c26d69311bccca327dc 2 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f40f6ec3b2a217ef7565f94a5e856c26d69311bccca327dc 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.483 06:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4DS 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4DS 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4DS 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.483 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=556105e42019bf8ace6050e0a63adb42 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ocC 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 556105e42019bf8ace6050e0a63adb42 1 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 556105e42019bf8ace6050e0a63adb42 1 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=556105e42019bf8ace6050e0a63adb42 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ocC 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ocC 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ocC 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b79a667346145f0116acf4c410b9d841 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZlH 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b79a667346145f0116acf4c410b9d841 1 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b79a667346145f0116acf4c410b9d841 1 00:33:40.484 06:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b79a667346145f0116acf4c410b9d841 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:40.484 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZlH 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZlH 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZlH 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9282ba4285e19477904bbfac2d3e6e5003b2216f4dbca5c5 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZcP 00:33:40.742 06:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9282ba4285e19477904bbfac2d3e6e5003b2216f4dbca5c5 2 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9282ba4285e19477904bbfac2d3e6e5003b2216f4dbca5c5 2 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9282ba4285e19477904bbfac2d3e6e5003b2216f4dbca5c5 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZcP 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZcP 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ZcP 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=0fe249703b08d5a97894cb91dc4569e3 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YTt 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0fe249703b08d5a97894cb91dc4569e3 0 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0fe249703b08d5a97894cb91dc4569e3 0 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0fe249703b08d5a97894cb91dc4569e3 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YTt 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YTt 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.YTt 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe9a9294ccdebcb6aaa471ce5c056b08d68aae197bb7803fc8dc73211e536a3d 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:40.742 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Wjk 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe9a9294ccdebcb6aaa471ce5c056b08d68aae197bb7803fc8dc73211e536a3d 3 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe9a9294ccdebcb6aaa471ce5c056b08d68aae197bb7803fc8dc73211e536a3d 3 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fe9a9294ccdebcb6aaa471ce5c056b08d68aae197bb7803fc8dc73211e536a3d 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Wjk 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Wjk 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Wjk 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 290321 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 290321 ']' 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:40.743 06:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Pl8 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tBU ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tBU 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FFU 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4DS ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4DS 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ocC 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZlH ]] 00:33:41.001 06:25:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZlH 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ZcP 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.YTt ]] 00:33:41.001 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.YTt 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Wjk 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme
00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:33:41.002 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:33:41.259 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:33:41.259 06:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:33:42.193 Waiting for block devices as requested
00:33:42.193 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:33:42.193 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:33:42.452 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:33:42.452 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:33:42.452 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:33:42.710 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:33:42.710 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:33:42.710 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:33:42.710 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:33:42.968 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:33:42.968 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:33:42.968 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:33:42.968 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:33:43.225 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:33:43.225 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:33:43.225 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:33:43.225 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:33:43.792 No valid GPT data, bailing
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:33:43.792
00:33:43.792 Discovery Log Number of Records 2, Generation counter 2
00:33:43.792 =====Discovery Log Entry 0======
00:33:43.792 trtype: tcp
00:33:43.792 adrfam: ipv4
00:33:43.792 subtype: current discovery subsystem
00:33:43.792 treq: not specified, sq flow control disable supported
00:33:43.792 portid: 1
00:33:43.792 trsvcid: 4420
00:33:43.792 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:33:43.792 traddr: 10.0.0.1
00:33:43.792 eflags: none
00:33:43.792 sectype: none
00:33:43.792 =====Discovery Log Entry 1======
00:33:43.792 trtype: tcp
00:33:43.792 adrfam: ipv4
00:33:43.792 subtype: nvme subsystem
00:33:43.792 treq: not specified, sq flow control disable supported
00:33:43.792 portid: 1
00:33:43.792 trsvcid: 4420
00:33:43.792 subnqn: nqn.2024-02.io.spdk:cnode0
00:33:43.792 traddr: 10.0.0.1
00:33:43.792 eflags: none
00:33:43.792 sectype: none
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:43.792 06:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]]
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:43.792 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.054 nvme0n1
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj:
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=:
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj:
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]]
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=:
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:44.054 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.055 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.342 nvme0n1
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.342 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]]
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.343 nvme0n1
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.343 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]]
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.602 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.603 nvme0n1
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.603 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]]
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.862 06:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.862 nvme0n1
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:44.862 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.122 nvme0n1
00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:45.122
06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:45.122 06:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.122 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.382 nvme0n1 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.382 06:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:45.382 06:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.382 06:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.382 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:45.383 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.383 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:45.383 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:45.383 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:45.383 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:45.383 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.383 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.641 nvme0n1 00:33:45.641 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.641 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.641 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.642 06:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:45.642 06:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.642 06:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.900 nvme0n1 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.900 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 3 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:45.901 06:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.901 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.161 nvme0n1 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.161 
06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.161 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.421 nvme0n1 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.421 06:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.421 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.422 06:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.681 nvme0n1 00:33:46.681 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.681 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.681 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.681 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.681 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:33:46.939 06:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.939 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.197 nvme0n1 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.197 
06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:47.197 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.198 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.198 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.455 nvme0n1 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.455 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.712 06:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.712 06:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:47.970 nvme0n1 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:47.970 
06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.970 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:47.971 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.971 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:47.971 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:47.971 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:47.971 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:47.971 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.971 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.229 nvme0n1 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:48.229 06:25:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:48.229 06:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:48.799 nvme0n1
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]]
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:48.799 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.124 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:49.382 nvme0n1
00:33:49.382 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.382 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:49.382 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:49.382 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.382 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:49.382 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.641 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:49.641 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:49.641 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]]
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:49.642 06:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.211 nvme0n1
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:33:50.211 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]]
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.212 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.780 nvme0n1
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:33:50.780 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:50.781 06:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:51.351 nvme0n1
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj:
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=:
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj:
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=:
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:51.351 06:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:52.288 nvme0n1
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:52.288 06:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:53.228 nvme0n1
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]]
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:53.228 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:53.229 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:53.229 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:53.229 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:53.229 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:53.229 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:53.229 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:53.229 06:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:54.166 nvme0n1
00:33:54.166 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:54.166 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:54.166 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:54.166 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:54.166 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:54.166 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:54.425 06:26:05
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.425 06:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:55.363 nvme0n1 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:55.363 
06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:55.363 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:55.364 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.364 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.364 06:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 nvme0n1 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.303 06:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.303 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.304 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.564 nvme0n1 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:33:56.564 06:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:56.564 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.565 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.825 nvme0n1 00:33:56.825 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.825 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.825 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.825 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.825 06:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.825 
06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.825 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.826 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:56.826 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.826 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.084 nvme0n1 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.084 06:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:57.084 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.085 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:57.343 nvme0n1 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.343 
06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.343 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.602 nvme0n1 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.602 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.603 06:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.603 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.862 nvme0n1 00:33:57.862 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.862 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.862 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.862 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.862 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.862 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.862 06:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.862 06:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.862 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.862 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.862 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.862 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.863 06:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.863 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.123 nvme0n1 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.123 06:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.123 06:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.123 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.382 nvme0n1 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.382 06:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.382 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:58.640 nvme0n1 00:33:58.640 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.640 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.640 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.640 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.640 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.640 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.640 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.641 
06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.641 06:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.899 nvme0n1 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.899 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.900 06:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.900 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.157 nvme0n1 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.157 06:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.157 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.158 06:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.158 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.726 nvme0n1 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.726 06:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.726 06:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.726 06:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.986 nvme0n1 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.986 06:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.986 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:00.246 nvme0n1 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.246 
06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.246 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.505 nvme0n1 00:34:00.505 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.505 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.505 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.505 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.505 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:00.763 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.764 06:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.764 06:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.332 nvme0n1 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.332 06:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.332 06:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.332 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.902 nvme0n1 00:34:01.902 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.902 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.902 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.902 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.902 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.902 06:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.902 06:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.902 06:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.902 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.474 nvme0n1 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.474 06:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.474 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.475 06:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:03.042 nvme0n1 00:34:03.042 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.042 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.042 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.042 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.042 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.043 
06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.043 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.610 nvme0n1 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:03.610 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.611 06:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.611 06:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.547 nvme0n1 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.547 06:26:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.547 06:26:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.547 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.548 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.548 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.548 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.548 06:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.921 nvme0n1 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.921 06:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.921 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:05.922 06:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.922 06:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 nvme0n1 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.862 06:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.862 06:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.862 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
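Each iteration above follows the same pattern: configure the initiator's allowed DH-CHAP digest and DH group via `bdev_nvme_set_options`, attach with the per-keyid key pair via `bdev_nvme_attach_controller`, verify the controller came up with `bdev_nvme_get_controllers`, then detach. A minimal dry-run sketch of one such iteration is below; it only echoes the RPC invocations the log shows (nothing here contacts a real SPDK target), and the `run` helper is an illustrative assumption, not part of `host/auth.sh`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of one keyid iteration of the auth loop seen in this log.
# Parameters mirror the sha384/ffdhe8192 keyid=1 pass; "run" is a local
# helper (an assumption for illustration) that prints instead of executing.
digest=sha384
dhgroup=ffdhe8192
keyid=1
ip=10.0.0.1

run() { echo "rpc_cmd $*"; }

# 1. Restrict the initiator to the digest/dhgroup under test.
run bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. Attach with the host key and (bidirectional) controller key for this keyid.
run bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 3. Confirm the controller authenticated and is listed, then tear it down.
run bdev_nvme_get_controllers
run bdev_nvme_detach_controller nvme0
```

Note the log's `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion: when no controller key exists for a keyid (as with keyid 4 above), the `--dhchap-ctrlr-key` argument is omitted entirely rather than passed empty.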
00:34:07.802 nvme0n1 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.802 
06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.802 06:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.742 nvme0n1 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:08.742 06:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.742 06:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.002 nvme0n1 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.002 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:34:09.003 06:26:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.003 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.262 nvme0n1 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]]
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.262 nvme0n1
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.262 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.520 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.520 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.520 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.520 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.520 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.520 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.520 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]]
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.521 nvme0n1
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.521 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.778 06:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.778 nvme0n1
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj:
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=:
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj:
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]]
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=:
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.778 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.077 nvme0n1
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:34:10.077 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==:
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]]
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==:
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.078 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.355 nvme0n1
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h:
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC:
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.355 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.615 nvme0n1
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==:
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]]
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl:
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:10.615 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.616 06:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.876 nvme0n1
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=:
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:10.876
06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.876 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 nvme0n1 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.135 06:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 nvme0n1 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.700 06:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.700 06:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.700 06:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.957 nvme0n1 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.957 06:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.957 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.958 06:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.958 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.217 nvme0n1 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.217 06:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.217 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.218 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.218 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.218 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:12.785 nvme0n1 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.785 
06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.785 06:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.044 nvme0n1 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.044 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.045 06:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.045 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.612 nvme0n1 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.612 06:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.612 06:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.612 06:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.180 nvme0n1 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.180 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.439 06:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.439 06:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.439 06:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.006 nvme0n1 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.006 06:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.006 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:15.574 nvme0n1 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.574 
06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.574 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.575 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.575 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.575 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.575 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.575 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.575 06:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.141 nvme0n1 00:34:16.141 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.141 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4YTAyNmJiMzQ1NTU2ODc5ZjM4ODI0MTc2NmUxZmMz1hbj: 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTYwZmM1NDMzZGI4ZjU3YWE1MjhhZjFjOWFlYTVmYWY2ZTRlNjk3ZTI5NzdmNjZhZjI1NTEwODk5YzAyMzIxY//Z8z0=: 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.142 06:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.142 06:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.074 nvme0n1 00:34:17.074 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.074 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.074 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.074 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.074 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.331 06:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:34:17.331 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.332 06:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.332 06:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.269 nvme0n1 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.269 06:26:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTU2MTA1ZTQyMDE5YmY4YWNlNjA1MGUwYTYzYWRiNDJiug5h: 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: ]] 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5YTY2NzM0NjE0NWYwMTE2YWNmNGM0MTBiOWQ4NDFdsdXC: 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.269 06:26:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.269 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.270 06:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.207 nvme0n1 00:34:19.207 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.207 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.207 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.207 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.207 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.207 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.466 06:26:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI4MmJhNDI4NWUxOTQ3NzkwNGJiZmFjMmQzZTZlNTAwM2IyMjE2ZjRkYmNhNWM173FtQA==: 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGZlMjQ5NzAzYjA4ZDVhOTc4OTRjYjkxZGM0NTY5ZTM1stdl: 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.466 06:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:20.402 nvme0n1 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU5YTkyOTRjY2RlYmNiNmFhYTQ3MWNlNWMwNTZiMDhkNjhhYWUxOTdiYjc4MDNmYzhkYzczMjExZTUzNmEzZGaoQgI=: 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.402 
06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.402 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.403 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.403 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.403 06:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.338 nvme0n1 00:34:21.338 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.338 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.338 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.338 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.338 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.338 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.596 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzM0NmYyYzEwZTE5MmNjOTg4NGE1ODI2YjY5N2Y5MzA2NTIyNTg5ZjBkODc5MjMx2SSw0Q==: 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: ]] 00:34:21.597 
06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQwZjZlYzNiMmEyMTdlZjc1NjVmOTRhNWU4NTZjMjZkNjkzMTFiY2NjYTMyN2Rj78IrZg==: 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.597 request: 00:34:21.597 { 00:34:21.597 "name": "nvme0", 00:34:21.597 "trtype": "tcp", 00:34:21.597 "traddr": "10.0.0.1", 00:34:21.597 "adrfam": "ipv4", 00:34:21.597 "trsvcid": "4420", 00:34:21.597 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:21.597 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:21.597 "prchk_reftag": false, 00:34:21.597 "prchk_guard": false, 00:34:21.597 "hdgst": false, 00:34:21.597 "ddgst": false, 00:34:21.597 "method": "bdev_nvme_attach_controller", 00:34:21.597 "req_id": 1 00:34:21.597 } 00:34:21.597 Got JSON-RPC error response 00:34:21.597 response: 00:34:21.597 { 00:34:21.597 "code": -5, 00:34:21.597 "message": "Input/output error" 00:34:21.597 } 00:34:21.597 06:26:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.597 06:26:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.597 request: 00:34:21.597 { 00:34:21.597 "name": "nvme0", 00:34:21.597 "trtype": "tcp", 00:34:21.597 "traddr": "10.0.0.1", 00:34:21.597 "adrfam": "ipv4", 00:34:21.597 
"trsvcid": "4420", 00:34:21.597 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:21.597 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:21.597 "prchk_reftag": false, 00:34:21.597 "prchk_guard": false, 00:34:21.597 "hdgst": false, 00:34:21.597 "ddgst": false, 00:34:21.597 "dhchap_key": "key2", 00:34:21.597 "method": "bdev_nvme_attach_controller", 00:34:21.597 "req_id": 1 00:34:21.597 } 00:34:21.597 Got JSON-RPC error response 00:34:21.597 response: 00:34:21.597 { 00:34:21.597 "code": -5, 00:34:21.597 "message": "Input/output error" 00:34:21.597 } 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.597 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.856 
06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:21.856 06:26:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.856 06:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.856 request: 00:34:21.856 { 00:34:21.856 "name": "nvme0", 00:34:21.856 "trtype": "tcp", 00:34:21.856 "traddr": "10.0.0.1", 00:34:21.856 "adrfam": "ipv4", 00:34:21.856 "trsvcid": "4420", 00:34:21.856 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:21.856 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:21.856 "prchk_reftag": false, 00:34:21.856 "prchk_guard": false, 00:34:21.856 "hdgst": false, 00:34:21.856 "ddgst": false, 00:34:21.856 "dhchap_key": "key1", 00:34:21.856 "dhchap_ctrlr_key": "ckey2", 00:34:21.856 "method": "bdev_nvme_attach_controller", 00:34:21.856 "req_id": 1 00:34:21.856 } 00:34:21.856 Got JSON-RPC error response 00:34:21.856 response: 00:34:21.856 { 00:34:21.856 "code": -5, 00:34:21.856 "message": "Input/output error" 00:34:21.856 } 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:21.856 06:26:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:21.856 rmmod nvme_tcp 00:34:21.856 rmmod nvme_fabrics 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 290321 ']' 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 290321 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 290321 ']' 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 290321 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290321 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290321' 00:34:21.856 killing process with pid 290321 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 290321 00:34:21.856 06:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 290321 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.247 06:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:25.154 
06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:25.154 06:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:26.529 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:26.529 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:26.529 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:26.529 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:26.529 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:26.529 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:26.529 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:26.529 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:26.529 0000:80:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:34:27.469 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:27.469 06:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Pl8 /tmp/spdk.key-null.FFU /tmp/spdk.key-sha256.ocC /tmp/spdk.key-sha384.ZcP /tmp/spdk.key-sha512.Wjk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:27.469 06:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:28.844 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:28.844 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:28.844 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:28.844 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:28.844 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:28.844 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:28.844 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:28.844 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:28.844 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:28.844 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:28.844 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:28.844 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:28.844 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:28.844 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:28.844 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:28.844 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:28.844 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:28.844 00:34:28.844 real 0m51.611s 00:34:28.844 user 0m49.114s 00:34:28.844 sys 0m6.040s 00:34:28.844 06:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:28.844 06:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:28.844 ************************************ 00:34:28.844 END TEST nvmf_auth_host 00:34:28.844 ************************************ 00:34:28.844 06:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:28.844 06:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:28.844 06:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.845 ************************************ 00:34:28.845 START TEST nvmf_digest 00:34:28.845 ************************************ 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:28.845 * Looking for test storage... 
00:34:28.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.845 06:26:40 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:34:28.845 06:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.751 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:30.752 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:30.752 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:30.752 06:26:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:30.752 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:30.752 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.752 06:26:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:30.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:34:30.752 00:34:30.752 --- 10.0.0.2 ping statistics --- 00:34:30.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.752 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:30.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:34:30.752 00:34:30.752 --- 10.0.0.1 ping statistics --- 00:34:30.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.752 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:34:30.752 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:31.010 ************************************ 00:34:31.010 START TEST nvmf_digest_clean 00:34:31.010 ************************************ 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:31.010 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=299977 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 299977 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 299977 ']' 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:31.011 06:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:31.011 [2024-07-26 06:26:42.198908] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:31.011 [2024-07-26 06:26:42.199084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.011 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.011 [2024-07-26 06:26:42.328312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.270 [2024-07-26 06:26:42.576844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.270 [2024-07-26 06:26:42.576929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.270 [2024-07-26 06:26:42.576957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.270 [2024-07-26 06:26:42.576982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.270 [2024-07-26 06:26:42.577003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:31.270 [2024-07-26 06:26:42.577049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.839 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.407 null0 00:34:32.407 [2024-07-26 06:26:43.552170] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.407 [2024-07-26 06:26:43.576423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=300136 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 300136 /var/tmp/bperf.sock 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 300136 ']' 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:32.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:32.407 06:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.407 [2024-07-26 06:26:43.659220] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:32.407 [2024-07-26 06:26:43.659384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300136 ] 00:34:32.408 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.667 [2024-07-26 06:26:43.785084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.925 [2024-07-26 06:26:44.017392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.491 06:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:33.491 06:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:33.491 06:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:33.491 06:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:33.491 06:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:34.058 06:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:34.059 06:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:34.317 nvme0n1 00:34:34.317 06:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:34.317 06:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:34.576 Running I/O for 2 seconds... 00:34:36.481 00:34:36.481 Latency(us) 00:34:36.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:36.481 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:36.481 nvme0n1 : 2.01 14475.23 56.54 0.00 0.00 8827.41 4514.70 27379.48 00:34:36.481 =================================================================================================================== 00:34:36.481 Total : 14475.23 56.54 0.00 0.00 8827.41 4514.70 27379.48 00:34:36.481 0 00:34:36.481 06:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:36.481 06:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:36.481 06:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:36.481 06:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:36.481 | select(.opcode=="crc32c") 00:34:36.481 | "\(.module_name) \(.executed)"' 00:34:36.481 06:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 300136 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 300136 ']' 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 300136 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 300136 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 300136' 00:34:36.739 killing process with pid 300136 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 300136 00:34:36.739 Received shutdown signal, test time was about 2.000000 seconds 00:34:36.739 00:34:36.739 Latency(us) 00:34:36.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:36.739 =================================================================================================================== 00:34:36.739 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:34:36.739 06:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 300136 00:34:38.109 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=300797 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 300797 /var/tmp/bperf.sock 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 300797 ']' 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:38.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:38.109 06:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:38.109 [2024-07-26 06:26:49.160010] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:38.109 [2024-07-26 06:26:49.160184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300797 ] 00:34:38.109 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:38.109 Zero copy mechanism will not be used. 
00:34:38.109 EAL: No free 2048 kB hugepages reported on node 1 00:34:38.109 [2024-07-26 06:26:49.281602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.368 [2024-07-26 06:26:49.529793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.934 06:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:38.934 06:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:38.934 06:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:38.934 06:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:38.934 06:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:39.499 06:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.499 06:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.758 nvme0n1 00:34:40.018 06:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:40.018 06:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:40.018 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:40.018 Zero copy mechanism will not be used. 00:34:40.018 Running I/O for 2 seconds... 
00:34:41.944 00:34:41.944 Latency(us) 00:34:41.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.944 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:41.944 nvme0n1 : 2.00 3411.75 426.47 0.00 0.00 4682.52 1796.17 7039.05 00:34:41.944 =================================================================================================================== 00:34:41.944 Total : 3411.75 426.47 0.00 0.00 4682.52 1796.17 7039.05 00:34:41.944 0 00:34:41.944 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:41.944 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:41.944 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:41.944 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:41.944 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:41.944 | select(.opcode=="crc32c") 00:34:41.944 | "\(.module_name) \(.executed)"' 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 300797 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 300797 ']' 
00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 300797 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 300797 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 300797' 00:34:42.228 killing process with pid 300797 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 300797 00:34:42.228 Received shutdown signal, test time was about 2.000000 seconds 00:34:42.228 00:34:42.228 Latency(us) 00:34:42.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.228 =================================================================================================================== 00:34:42.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.228 06:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 300797 00:34:43.165 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=301402 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 301402 /var/tmp/bperf.sock 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 301402 ']' 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:43.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:43.423 06:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:43.423 [2024-07-26 06:26:54.595791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:43.423 [2024-07-26 06:26:54.595931] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301402 ] 00:34:43.423 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.423 [2024-07-26 06:26:54.728028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.682 [2024-07-26 06:26:54.985776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.248 06:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:44.248 06:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:44.248 06:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:44.248 06:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:44.248 06:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:44.813 06:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:44.813 06:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:45.380 nvme0n1 00:34:45.380 06:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:45.380 06:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:45.380 Running I/O for 2 seconds... 00:34:47.915 00:34:47.915 Latency(us) 00:34:47.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.915 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:47.915 nvme0n1 : 2.01 17143.97 66.97 0.00 0.00 7456.02 3422.44 16408.27 00:34:47.915 =================================================================================================================== 00:34:47.915 Total : 17143.97 66.97 0.00 0.00 7456.02 3422.44 16408.27 00:34:47.915 0 00:34:47.915 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:47.915 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:47.915 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:47.916 | select(.opcode=="crc32c") 00:34:47.916 | "\(.module_name) \(.executed)"' 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 301402 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 301402 ']' 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 301402 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 301402 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 301402' 00:34:47.916 killing process with pid 301402 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 301402 00:34:47.916 Received shutdown signal, test time was about 2.000000 seconds 00:34:47.916 00:34:47.916 Latency(us) 00:34:47.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.916 =================================================================================================================== 00:34:47.916 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:34:47.916 06:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 301402 00:34:48.854 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=302021 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 302021 /var/tmp/bperf.sock 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 302021 ']' 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:48.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:48.854 06:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:48.854 [2024-07-26 06:27:00.102389] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:48.854 [2024-07-26 06:27:00.102586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302021 ] 00:34:48.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:48.854 Zero copy mechanism will not be used. 
00:34:48.854 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.112 [2024-07-26 06:27:00.248151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.370 [2024-07-26 06:27:00.508861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.938 06:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:49.938 06:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:49.938 06:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:49.938 06:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:49.938 06:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:50.504 06:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:50.505 06:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.072 nvme0n1 00:34:51.072 06:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:51.072 06:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.072 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.072 Zero copy mechanism will not be used. 00:34:51.072 Running I/O for 2 seconds... 
00:34:52.973 00:34:52.973 Latency(us) 00:34:52.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.973 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:52.973 nvme0n1 : 2.00 3609.94 451.24 0.00 0.00 4419.67 3470.98 7670.14 00:34:52.973 =================================================================================================================== 00:34:52.973 Total : 3609.94 451.24 0.00 0.00 4419.67 3470.98 7670.14 00:34:52.973 0 00:34:52.973 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:52.973 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:52.973 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:52.973 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:52.973 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:52.973 | select(.opcode=="crc32c") 00:34:52.973 | "\(.module_name) \(.executed)"' 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 302021 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 302021 ']' 
00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 302021 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302021 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302021' 00:34:53.231 killing process with pid 302021 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 302021 00:34:53.231 Received shutdown signal, test time was about 2.000000 seconds 00:34:53.231 00:34:53.231 Latency(us) 00:34:53.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.231 =================================================================================================================== 00:34:53.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:53.231 06:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 302021 00:34:54.607 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 299977 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 299977 ']' 
00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 299977 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 299977 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 299977' 00:34:54.607 killing process with pid 299977 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 299977 00:34:54.607 06:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 299977 00:34:55.543 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:34:55.801 00:34:55.801 real 0m24.832s 00:34:55.801 user 0m48.296s 00:34:55.801 sys 0m4.507s 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.801 ************************************ 00:34:55.801 END TEST nvmf_digest_clean 00:34:55.801 ************************************ 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:55.801 06:27:06 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:55.801 ************************************ 00:34:55.801 START TEST nvmf_digest_error 00:34:55.801 ************************************ 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=302949 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 302949 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 302949 ']' 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:55.801 06:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.801 [2024-07-26 06:27:07.080672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:55.801 [2024-07-26 06:27:07.080822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.059 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.059 [2024-07-26 06:27:07.221656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.316 [2024-07-26 06:27:07.476832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.316 [2024-07-26 06:27:07.476911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.316 [2024-07-26 06:27:07.476956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.316 [2024-07-26 06:27:07.476995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.316 [2024-07-26 06:27:07.477030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:56.316 [2024-07-26 06:27:07.477115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:56.882 [2024-07-26 06:27:08.079532] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.882 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:34:57.140 null0 00:34:57.140 [2024-07-26 06:27:08.457170] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.398 [2024-07-26 06:27:08.481425] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.398 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.398 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:57.398 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=303598 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 303598 /var/tmp/bperf.sock 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 303598 ']' 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock...' 00:34:57.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.399 06:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:57.399 [2024-07-26 06:27:08.561814] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:57.399 [2024-07-26 06:27:08.561978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303598 ] 00:34:57.399 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.399 [2024-07-26 06:27:08.692727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.657 [2024-07-26 06:27:08.951835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.248 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:58.248 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:58.248 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.248 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.512 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:58.512 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:58.512 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.512 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.512 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.512 06:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.770 nvme0n1 00:34:59.029 06:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:59.029 06:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.029 06:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.029 06:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.030 06:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:59.030 06:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:59.030 Running I/O for 2 seconds... 
00:34:59.030 [2024-07-26 06:27:10.244537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.030 [2024-07-26 06:27:10.244605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.030 [2024-07-26 06:27:10.244634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.030 [2024-07-26 06:27:10.273104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.030 [2024-07-26 06:27:10.273164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.030 [2024-07-26 06:27:10.273190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.030 [2024-07-26 06:27:10.295467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.030 [2024-07-26 06:27:10.295517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.030 [2024-07-26 06:27:10.295547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.030 [2024-07-26 06:27:10.319663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.030 [2024-07-26 06:27:10.319712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.030 [2024-07-26 06:27:10.319742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.030 [2024-07-26 06:27:10.342569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.030 [2024-07-26 06:27:10.342618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.030 [2024-07-26 06:27:10.342646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.290 [2024-07-26 06:27:10.369459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.290 [2024-07-26 06:27:10.369509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.290 [2024-07-26 06:27:10.369538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.290 [2024-07-26 06:27:10.386087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.290 [2024-07-26 06:27:10.386148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.290 [2024-07-26 06:27:10.386173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.290 [2024-07-26 06:27:10.411347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.411410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 
06:27:10.411452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.436510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.436560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.436589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.463380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.463437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.463468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.488587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.488627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.488651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.512272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.512327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:13641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.512352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.531498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.531539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.531564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.555746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.555811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.555842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.579420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.579471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.579500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.603766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.603816] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.603846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.291 [2024-07-26 06:27:10.622639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.291 [2024-07-26 06:27:10.622680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.291 [2024-07-26 06:27:10.622704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.646368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.646419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.646448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.670909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.670952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.670993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.694559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.694608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.694637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.716519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.716578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.716607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.734590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.734639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.734668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.756504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.756554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.756583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 
06:27:10.781514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.781563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.781593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.807930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.807980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.808009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.551 [2024-07-26 06:27:10.832611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.551 [2024-07-26 06:27:10.832662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.551 [2024-07-26 06:27:10.832692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.552 [2024-07-26 06:27:10.855767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.552 [2024-07-26 06:27:10.855807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.552 [2024-07-26 06:27:10.855838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.552 [2024-07-26 06:27:10.872177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.552 [2024-07-26 06:27:10.872216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.552 [2024-07-26 06:27:10.872240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:10.896956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:10.897005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:10.897035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:10.923626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:10.923667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:10.923690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:10.948705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:10.948755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:10.948784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:10.972946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:10.972994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:10.973023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:10.997713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:10.997761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:10.997791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:11.014053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:11.014124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:11.014167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:11.036795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:11.036845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8543 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:11.036875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:11.055854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:11.055904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:11.055934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:11.079027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.812 [2024-07-26 06:27:11.079086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.812 [2024-07-26 06:27:11.079136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.812 [2024-07-26 06:27:11.105721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.813 [2024-07-26 06:27:11.105772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.813 [2024-07-26 06:27:11.105801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.813 [2024-07-26 06:27:11.128724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:34:59.813 [2024-07-26 06:27:11.128773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.813 [2024-07-26 06:27:11.128802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.146375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.146417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.146443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.165397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.165440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.165464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.183688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.183742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.183772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.210424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.210475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.210500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.232957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.233017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.233052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.254312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.254369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.254394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.281225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.281266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.281290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.304889] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.304929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.304954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.324308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.324358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.324387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.348477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.348518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.348542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.373238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.373278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.373873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.072 [2024-07-26 06:27:11.396124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.072 [2024-07-26 06:27:11.396166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.072 [2024-07-26 06:27:11.396191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.422290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.422339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.422369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.439248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.439290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.439315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.463791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.463831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.463855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.484077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.484118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.484161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.503476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.503525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.503555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.528726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.528775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.528805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.550256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.550297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23329 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.550321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.572524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.572573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.572602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.600229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.600275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.600302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.627453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.627504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.627543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.331 [2024-07-26 06:27:11.649662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.331 [2024-07-26 06:27:11.649703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.331 [2024-07-26 06:27:11.649727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.667616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.667666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.667694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.694489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.694540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.694569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.717043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.717100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.717143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.742101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.742159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.742183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.764601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.764641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.764665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.787731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.787780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.787809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.806863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.806911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.806941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.829280] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.829323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.829347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.854783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.589 [2024-07-26 06:27:11.854834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.589 [2024-07-26 06:27:11.854864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.589 [2024-07-26 06:27:11.877903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.590 [2024-07-26 06:27:11.877952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.590 [2024-07-26 06:27:11.877982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.590 [2024-07-26 06:27:11.903862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.590 [2024-07-26 06:27:11.903911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.590 [2024-07-26 06:27:11.903940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:11.930337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:11.930400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:11.930429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:11.953791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:11.953831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:11.953855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:11.971480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:11.971528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:11.971557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:11.994326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:11.994382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:11.994406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:12.018482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:12.018532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:12.018568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:12.043864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:12.043905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:12.043929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:12.068866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:12.068915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:12.068944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:12.093524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:12.093574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7982 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:12.093604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:12.112243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:12.112285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:12.112310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:12.136172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:12.136212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:12.136236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.848 [2024-07-26 06:27:12.160778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:00.848 [2024-07-26 06:27:12.160829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.848 [2024-07-26 06:27:12.160858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.106 [2024-07-26 06:27:12.187588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:01.106 [2024-07-26 06:27:12.187651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.106 [2024-07-26 06:27:12.187681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:01.106 [2024-07-26 06:27:12.212313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:35:01.106 [2024-07-26 06:27:12.212376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.106 [2024-07-26 06:27:12.212400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:01.106
00:35:01.106 Latency(us)
00:35:01.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:01.106 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:01.106 nvme0n1 : 2.05 10815.51 42.25 0.00 0.00 11585.41 5898.24 51263.72
00:35:01.106 ===================================================================================================================
00:35:01.106 Total : 10815.51 42.25 0.00 0.00 11585.41 5898.24 51263.72
00:35:01.106 0
00:35:01.106 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:01.106 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:01.106 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:01.106 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:01.106 | .driver_specific
00:35:01.106 | .nvme_error
00:35:01.106 | .status_code
00:35:01.106 | .command_transient_transport_error'
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 86 > 0 ))
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 303598
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 303598 ']'
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 303598
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 303598
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 303598'
killing process with pid 303598
06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 303598
Received shutdown signal, test time was about 2.000000 seconds
00:35:01.366
00:35:01.366 Latency(us)
00:35:01.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:01.366 ===================================================================================================================
00:35:01.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:01.366 06:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- #
wait 303598
00:35:02.303 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=304271
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 304271 /var/tmp/bperf.sock
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 304271 ']'
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:02.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:02.303 06:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:02.561 [2024-07-26 06:27:13.691474] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:35:02.561 [2024-07-26 06:27:13.691616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304271 ]
00:35:02.561 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:02.561 Zero copy mechanism will not be used.
00:35:02.561 EAL: No free 2048 kB hugepages reported on node 1
00:35:02.561 [2024-07-26 06:27:13.813952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:02.820 [2024-07-26 06:27:14.053229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:03.386 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:03.386 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:03.386 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:03.387 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:03.645 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:03.645 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:03.645 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:03.645 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:03.645 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:03.645 06:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:04.212 nvme0n1
00:35:04.212 06:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:04.212 06:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:04.213 06:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:04.213 06:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:04.213 06:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:04.213 06:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:04.473 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:04.473 Zero copy mechanism will not be used.
00:35:04.473 Running I/O for 2 seconds...
00:35:04.473 [2024-07-26 06:27:15.580764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.580843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.580875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.590252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.590295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.590320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.599339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.599380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.599405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.608299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.608340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.608370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.617324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.617364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.617389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.626541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.626582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.626608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.635414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.635455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.635480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.644332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.644373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 
[2024-07-26 06:27:15.644398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.653409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.653448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.653473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.662387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.662428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.473 [2024-07-26 06:27:15.662452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.473 [2024-07-26 06:27:15.671540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.473 [2024-07-26 06:27:15.671579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.671604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.680545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.680600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.680625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.689529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.689569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.689594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.698548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.698588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.698612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.707320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.707359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.707383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.716358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 
06:27:15.716399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.716424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.725540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.725580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.725614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.734728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.734767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.743717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.743756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.743779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.752652] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.752692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.752716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.761520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.761559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.761583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.770445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.770487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.770512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.779474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.779514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.779538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.788357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.788398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.788423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.474 [2024-07-26 06:27:15.797255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.474 [2024-07-26 06:27:15.797295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.474 [2024-07-26 06:27:15.797320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.733 [2024-07-26 06:27:15.806292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.733 [2024-07-26 06:27:15.806345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.733 [2024-07-26 06:27:15.806389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.815446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.815487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.815512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.824208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.824257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.824285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.833862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.833904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.833928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.842922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.842964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.842989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.852053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.852106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.852133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.861114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.861154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.861179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.870157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.870197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.870221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.879137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.879181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.879214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.888225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.888270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.888295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.897204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.897246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.897273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.906510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.906574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.915588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.915629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.915654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.924488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.924527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.924552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.933333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.933374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.933398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.942372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.942412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.942436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.951469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.951509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.960499] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.960547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.960572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.969247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.969286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.969310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.978314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.978354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.978378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.987350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.987389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.987413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:15.996458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:15.996497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:15.996522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:16.005387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:16.005426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:16.005451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:16.014991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:16.015032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:16.015057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:16.023884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:16.023924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:16.023948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:16.033055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:16.033105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:16.033136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:16.042217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.734 [2024-07-26 06:27:16.042257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.734 [2024-07-26 06:27:16.042281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.734 [2024-07-26 06:27:16.051322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.735 [2024-07-26 06:27:16.051362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.735 [2024-07-26 06:27:16.051386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.735 [2024-07-26 06:27:16.060297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:04.735 [2024-07-26 06:27:16.060336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.735 [2024-07-26 06:27:16.060361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.069751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.069794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.069819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.079025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.079091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.079134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.088004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.088045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.088080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.096959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.097001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.097026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.106016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.106056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.106097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.115100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.115150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.115177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.123994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.124033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.124057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.132892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.132932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.132957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.141929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.141969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.141994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.150796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.150851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.150876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.159722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.159764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.159790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 
06:27:16.168530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.168570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.168594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.177747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.177788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.177812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.186618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.186659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.186694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.195683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.195725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.195751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.204557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.204597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.204621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.213493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.213532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.213557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.222560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.222601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.222625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.231421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.231461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.231486] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.240252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.240292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.240316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.249553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.249594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.249618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.258487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.258527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.258552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.267446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.267494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.267520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.276533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.276572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.276597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.285556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.285595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.285619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.294667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.294707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.294731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.303821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.303861] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.303886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.313297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.313339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.313363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.322496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.322536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.322560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.005 [2024-07-26 06:27:16.331646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.005 [2024-07-26 06:27:16.331686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.005 [2024-07-26 06:27:16.331711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.340858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.340899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.340947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.350131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.350174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.350199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.359068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.359110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.359134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.368284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.368325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.368350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 
06:27:16.377120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.377160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.377185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.386236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.386276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.386300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.395377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.395417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.395442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.404480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.404519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.404544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.413428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.413467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.413492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.422444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.422492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.422517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.431407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.431447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.431472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.440334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.440373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.440398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.449325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.449365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.449405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.458208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.458248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.458272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.467026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.467073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.467100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.476032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.476080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.476106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.484910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.484950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.484975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.494007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.494047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.494091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.502988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.503028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.503053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.511997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.512037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.512071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.521125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.264 [2024-07-26 06:27:16.521166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.264 [2024-07-26 06:27:16.521191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.264 [2024-07-26 06:27:16.530114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.530153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.530177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.265 [2024-07-26 06:27:16.539088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.539128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.539153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.265 [2024-07-26 06:27:16.548074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.548114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.548138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.265 [2024-07-26 06:27:16.556969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.557008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.557032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.265 [2024-07-26 06:27:16.566026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.566074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.566101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.265 [2024-07-26 06:27:16.574837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.574884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.574909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.265 [2024-07-26 06:27:16.584344] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.584385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.584411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.265 [2024-07-26 06:27:16.595161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.265 [2024-07-26 06:27:16.595218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.265 [2024-07-26 06:27:16.595244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.606825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.606867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.606892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.618694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.618734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.618758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.653141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.653227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.653256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.665307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.665364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.665391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.676986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.677030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.677055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.687821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.687865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.687897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.698699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.698742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.698768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.709456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.709497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.709522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.721249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.721291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.721316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.732319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.732388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.732414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.743611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.743654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.743698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.752808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.752851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.752877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.762979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.763024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.523 [2024-07-26 06:27:16.763070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.523 [2024-07-26 06:27:16.773167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.523 [2024-07-26 06:27:16.773209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.773234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.782450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.782503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.782530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.793166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.793210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.793251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.803741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.803786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.803828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.813831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.813876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.813902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.824520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.824563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.824588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.834506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.834549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.834575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.844555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.844601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.844628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.524 [2024-07-26 06:27:16.854870] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.524 [2024-07-26 06:27:16.854921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.524 [2024-07-26 06:27:16.854949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.865147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.865206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.865240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.874079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.874121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.874147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.882866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.882909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.882936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.891866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.891926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.891952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.900914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.900972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.900999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.910216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.910259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.910286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.919127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.919170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.919197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.928580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.928623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.928650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.782 [2024-07-26 06:27:16.937699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.782 [2024-07-26 06:27:16.937741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.782 [2024-07-26 06:27:16.937782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:16.946510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:16.946562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:16.946590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:16.955329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:16.955382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:16.955408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:16.964454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:16.964497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:16.964538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:16.973376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:16.973418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:16.973452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:16.982255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:16.982298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:16.982325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:16.991033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:16.991086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:16.991114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:16.999860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:16.999902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:16.999928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.008697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.008741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.008766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.017821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.017863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.017904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.026933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.026989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.027015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.035945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.035987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.036014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.044775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.044816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.044843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.053690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.053733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.053760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.062752] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.062810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.062851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.071652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.071703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.071746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.080655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.080700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.080726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.089745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.089803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.089828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.098835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.098910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.098951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:05.783 [2024-07-26 06:27:17.107803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:05.783 [2024-07-26 06:27:17.107846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.783 [2024-07-26 06:27:17.107889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.116848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.116906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.116932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.126284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.126329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.126371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.136354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.136428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.136456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.146993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.147039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.147081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.156233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.156276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.156302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.165127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.165173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.165215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.174041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.174091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.174141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.182942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.182984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.183024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.191909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.191950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.191976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.201285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.201327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.201368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.209943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.209985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.210010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.218828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.218868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.218894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.227677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.227720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.227761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.236421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.236478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.236504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.245276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.245318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.245360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.254541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.254583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.254633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.263605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.263664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.263691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.272710] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.272752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.272778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.281811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.281852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.281878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.290751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.290793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.290819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.299725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.299768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.299794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.308668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.308711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.308751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.317578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.317620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.317644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.326432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.326473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.326499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.335694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.335755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.043 [2024-07-26 06:27:17.335780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.043 [2024-07-26 06:27:17.344598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.043 [2024-07-26 06:27:17.344640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.044 [2024-07-26 06:27:17.344665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.044 [2024-07-26 06:27:17.353475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.044 [2024-07-26 06:27:17.353517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.044 [2024-07-26 06:27:17.353542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.044 [2024-07-26 06:27:17.362456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.044 [2024-07-26 06:27:17.362498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.044 [2024-07-26 06:27:17.362524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.044 [2024-07-26 06:27:17.371487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.044 [2024-07-26 06:27:17.371530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:06.044 [2024-07-26 06:27:17.371556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.302 [2024-07-26 06:27:17.380695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.302 [2024-07-26 06:27:17.380740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.302 [2024-07-26 06:27:17.380766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.302 [2024-07-26 06:27:17.389779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.302 [2024-07-26 06:27:17.389821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.302 [2024-07-26 06:27:17.389862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.302 [2024-07-26 06:27:17.398700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.302 [2024-07-26 06:27:17.398742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.302 [2024-07-26 06:27:17.398768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.302 [2024-07-26 06:27:17.407685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.302 [2024-07-26 06:27:17.407732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.302 [2024-07-26 06:27:17.407771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.302 [2024-07-26 06:27:17.417818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.302 [2024-07-26 06:27:17.417861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.302 [2024-07-26 06:27:17.417887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.302 [2024-07-26 06:27:17.429006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.302 [2024-07-26 06:27:17.429080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.302 [2024-07-26 06:27:17.429137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.302 [2024-07-26 06:27:17.438908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.438952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.438978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.445943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.445984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.446008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.457212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.457257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.457284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.467385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.467449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.467478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.478430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.478479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.478508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.489507] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.489556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.489586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.500522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.500571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.500600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.511491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.511539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.511568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.522451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.522500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.522529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.533035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.533094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.533139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.543914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.543964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.543994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.554435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.554476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.554501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.303 [2024-07-26 06:27:17.565777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:06.303 [2024-07-26 06:27:17.565820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.303 [2024-07-26 06:27:17.565845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.303 00:35:06.303 Latency(us) 00:35:06.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.303 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:06.303 nvme0n1 : 2.00 3280.59 410.07 0.00 0.00 4870.22 885.95 31263.10 00:35:06.303 =================================================================================================================== 00:35:06.303 Total : 3280.59 410.07 0.00 0.00 4870.22 885.95 31263.10 00:35:06.303 0 00:35:06.303 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:06.303 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:06.303 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:06.303 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:06.303 | .driver_specific 00:35:06.303 | .nvme_error 00:35:06.303 | .status_code 00:35:06.303 | .command_transient_transport_error' 00:35:06.561 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 )) 00:35:06.561 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 304271 00:35:06.561 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 304271 ']' 00:35:06.561 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 304271 00:35:06.561 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:06.561 06:27:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:06.561 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 304271 00:35:06.821 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:06.821 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:06.821 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 304271' 00:35:06.821 killing process with pid 304271 00:35:06.821 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 304271 00:35:06.821 Received shutdown signal, test time was about 2.000000 seconds 00:35:06.821 00:35:06.821 Latency(us) 00:35:06.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.821 =================================================================================================================== 00:35:06.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:06.821 06:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 304271 00:35:07.754 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:35:07.754 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:07.754 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:07.754 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:07.754 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:07.754 06:27:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:07.754 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=304937 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 304937 /var/tmp/bperf.sock 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 304937 ']' 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:07.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.755 06:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.755 [2024-07-26 06:27:19.040902] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:07.755 [2024-07-26 06:27:19.041033] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304937 ] 00:35:08.012 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.012 [2024-07-26 06:27:19.177090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.270 [2024-07-26 06:27:19.434549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.837 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.837 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:08.837 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.837 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:09.095 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:09.095 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.095 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.095 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.096 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.096 06:27:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.353 nvme0n1 00:35:09.353 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:09.353 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.353 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.353 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.353 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:09.353 06:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.611 Running I/O for 2 seconds... 
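The trace above derives its pass/fail signal by reading the `command_transient_transport_error` counter out of `bdev_get_iostat` with the jq filter shown earlier (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`). As a rough, self-contained illustration of the same kind of accounting, the sketch below counts the `data digest error` lines directly from a bdevperf-style log; the helper name and the minimal two-line sample are hypothetical, not part of the SPDK tree:

```shell
#!/bin/sh
# Hypothetical helper (not an SPDK script): count the data-digest errors
# that the NVMe/TCP initiator logged. In the run above, this count should
# track the command_transient_transport_error counter reported over RPC.
count_digest_errors() {
    grep -c 'data digest error' "$1"
}

# Two sample lines in the same shape as the bdevperf output above.
log=$(mktemp)
cat > "$log" <<'EOF'
[2024-07-26 06:27:17.201368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
[2024-07-26 06:27:17.209943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
EOF
count_digest_errors "$log"   # prints 2
```

The real test prefers the RPC counter over log scraping because the counter survives log rotation and is not sensitive to message wording.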
00:35:09.611 [2024-07-26 06:27:20.813729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:35:09.611 [2024-07-26 06:27:20.815016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.611 [2024-07-26 06:27:20.815100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:09.611 [2024-07-26 06:27:20.828447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:35:09.611 [2024-07-26 06:27:20.829680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.611 [2024-07-26 06:27:20.829747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:09.611 [2024-07-26 06:27:20.845414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:35:09.611 [2024-07-26 06:27:20.846818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.611 [2024-07-26 06:27:20.846877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.611 [2024-07-26 06:27:20.860676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:35:09.611 [2024-07-26 06:27:20.862243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.611 [2024-07-26 06:27:20.862283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.611 [2024-07-26 06:27:20.874690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:35:09.611 [2024-07-26 06:27:20.876244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.611 [2024-07-26 06:27:20.876300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:09.611 [2024-07-26 06:27:20.888300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:35:09.611 [2024-07-26 06:27:20.889297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.611 [2024-07-26 06:27:20.889338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:09.611 [2024-07-26 06:27:20.903362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:35:09.611 [2024-07-26 06:27:20.904551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.611 [2024-07-26 06:27:20.904592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:09.611 [2024-07-26 06:27:20.920265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:35:09.611 [2024-07-26 06:27:20.922168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.612 [2024-07-26 06:27:20.922223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:09.612 [2024-07-26 06:27:20.933882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:35:09.612 [2024-07-26 06:27:20.935262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.612 [2024-07-26 06:27:20.935303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:20.949360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:35:09.871 [2024-07-26 06:27:20.950798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:20.950842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:20.966610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:35:09.871 [2024-07-26 06:27:20.968988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:20.969042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:20.977177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:35:09.871 [2024-07-26 06:27:20.978431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17035 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:20.978468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:20.992205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:35:09.871 [2024-07-26 06:27:20.993237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:20.993292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:21.007538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:35:09.871 [2024-07-26 06:27:21.008739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:21.008794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:21.021619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:35:09.871 [2024-07-26 06:27:21.022774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:21.022827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:21.038204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:35:09.871 [2024-07-26 06:27:21.039657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:21.039701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.871 [2024-07-26 06:27:21.053238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5220 00:35:09.871 [2024-07-26 06:27:21.054597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.871 [2024-07-26 06:27:21.054656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.068156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:35:09.872 [2024-07-26 06:27:21.069579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.069636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.083014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:35:09.872 [2024-07-26 06:27:21.084629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.084680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.100101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:35:09.872 [2024-07-26 
06:27:21.102349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.102404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.110412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:35:09.872 [2024-07-26 06:27:21.111353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.111408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.124648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:35:09.872 [2024-07-26 06:27:21.125639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.125694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.141167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:35:09.872 [2024-07-26 06:27:21.142321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.142381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.156279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005480) with pdu=0x2000195fb048 00:35:09.872 [2024-07-26 06:27:21.157631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.157686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.171418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:35:09.872 [2024-07-26 06:27:21.172783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.172842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.186445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:35:09.872 [2024-07-26 06:27:21.187812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.187871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:09.872 [2024-07-26 06:27:21.201301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:35:09.872 [2024-07-26 06:27:21.202745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.872 [2024-07-26 06:27:21.202803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.216696] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:35:10.132 [2024-07-26 06:27:21.218273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.218328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.230841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:35:10.132 [2024-07-26 06:27:21.232390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.232444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.244388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:35:10.132 [2024-07-26 06:27:21.245369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.245423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.259255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:35:10.132 [2024-07-26 06:27:21.260429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.260483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.276056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:35:10.132 [2024-07-26 06:27:21.278025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.278088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.289672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:35:10.132 [2024-07-26 06:27:21.291069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.291123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.304624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:35:10.132 [2024-07-26 06:27:21.306157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.306196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.319679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:35:10.132 [2024-07-26 06:27:21.321359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.321410] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.333527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:35:10.132 [2024-07-26 06:27:21.336077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.336118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.347467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:35:10.132 [2024-07-26 06:27:21.348535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.348590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.362880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:35:10.132 [2024-07-26 06:27:21.364016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.364078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.376786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:35:10.132 [2024-07-26 06:27:21.377962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 
06:27:21.378002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.393210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:35:10.132 [2024-07-26 06:27:21.394583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.394641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.408561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:35:10.132 [2024-07-26 06:27:21.410110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.410149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.422704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:35:10.132 [2024-07-26 06:27:21.424222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.424276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.436370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:35:10.132 [2024-07-26 06:27:21.437334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:212 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.437388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:10.132 [2024-07-26 06:27:21.451526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:35:10.132 [2024-07-26 06:27:21.452724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.132 [2024-07-26 06:27:21.452763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.468928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:35:10.393 [2024-07-26 06:27:21.470910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.470976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.482634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:35:10.393 [2024-07-26 06:27:21.484003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.484056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.497600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:35:10.393 [2024-07-26 06:27:21.499156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.499195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.514678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dece0 00:35:10.393 [2024-07-26 06:27:21.516899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.516953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.524861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:35:10.393 [2024-07-26 06:27:21.525800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.525852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.540025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:35:10.393 [2024-07-26 06:27:21.541001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.541066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.554820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195ea680 00:35:10.393 [2024-07-26 06:27:21.555823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.555882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.569913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:35:10.393 [2024-07-26 06:27:21.571215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.571257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.583926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:35:10.393 [2024-07-26 06:27:21.585031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.585094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.600290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:35:10.393 [2024-07-26 06:27:21.601692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.601737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.615484] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:35:10.393 [2024-07-26 06:27:21.616996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.617049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.629412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4140 00:35:10.393 [2024-07-26 06:27:21.630904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.630958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.642917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb760 00:35:10.393 [2024-07-26 06:27:21.643894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.643949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.658004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:35:10.393 [2024-07-26 06:27:21.659169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.659208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:35:10.393 [2024-07-26 06:27:21.674992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1710 00:35:10.393 [2024-07-26 06:27:21.676945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.676999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.688674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:35:10.393 [2024-07-26 06:27:21.690000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.690054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.703653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:35:10.393 [2024-07-26 06:27:21.705041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.705106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.393 [2024-07-26 06:27:21.720577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:35:10.393 [2024-07-26 06:27:21.722928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.393 [2024-07-26 06:27:21.722988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.731099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:35:10.653 [2024-07-26 06:27:21.732048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.732109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.745189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:35:10.653 [2024-07-26 06:27:21.746089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.746143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.761446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:35:10.653 [2024-07-26 06:27:21.762627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.762686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.777405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:35:10.653 [2024-07-26 06:27:21.778875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.778928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.792629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:35:10.653 [2024-07-26 06:27:21.794072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.794126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.810287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:35:10.653 [2024-07-26 06:27:21.812018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.812087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.826880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:35:10.653 [2024-07-26 06:27:21.828857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.828920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.842119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:35:10.653 [2024-07-26 06:27:21.844010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17218 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.844054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.856861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f92c0 00:35:10.653 [2024-07-26 06:27:21.858148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.858187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.872818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:35:10.653 [2024-07-26 06:27:21.874232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.874272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.888892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:35:10.653 [2024-07-26 06:27:21.890386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.653 [2024-07-26 06:27:21.890438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:10.653 [2024-07-26 06:27:21.906511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:35:10.653 [2024-07-26 06:27:21.908790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:10.653 [2024-07-26 06:27:21.908843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:10.653 [2024-07-26 06:27:21.920973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3e60
00:35:10.653 [2024-07-26 06:27:21.922652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:10.653 [2024-07-26 06:27:21.922705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
[... the same three-line pattern (data_crc32_calc_done "Data digest error" on tqpair=(0x618000005480), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each remaining injected-error I/O between 06:27:21.938 and 06:27:22.800, with varying cid/lba/pdu values ...]
00:35:11.720
00:35:11.720 Latency(us)
00:35:11.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:11.720 Job:
nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:11.720 nvme0n1 : 2.01 16679.82 65.16 0.00 0.00 7663.62 3373.89 18738.44
00:35:11.720 ===================================================================================================================
00:35:11.720 Total : 16679.82 65.16 0.00 0.00 7663.62 3373.89 18738.44
00:35:11.720 0
00:35:11.720 06:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:11.720 06:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:11.720 06:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:11.720 | .driver_specific
00:35:11.720 | .nvme_error
00:35:11.720 | .status_code
00:35:11.720 | .command_transient_transport_error'
00:35:11.720 06:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 ))
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 304937
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 304937 ']'
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 304937
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 304937
00:35:11.979 06:27:23
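In the trace above, `get_transient_errcount` queries bperf's JSON-RPC socket with `bdev_get_iostat` and pulls the per-status error counter out with the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` (the counters exist because the controller was created with `bdev_nvme_set_options --nvme-error-stat`). As a rough illustration only, here is the same extraction in Python against a sample payload shaped after that jq path; the full `bdev_get_iostat` schema is not shown in this log, so the surrounding fields are assumptions:

```python
import json

# Hypothetical bdev_get_iostat-style payload; only the nesting mirrored from
# the jq filter in the trace is meaningful, the rest is illustrative.
iostat_json = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 131}
            }
        }
    }]
})

def get_transient_errcount(payload: str) -> int:
    """Python equivalent of the jq filter used by get_transient_errcount."""
    bdev = json.loads(payload)["bdevs"][0]
    return bdev["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]

print(get_transient_errcount(iostat_json))  # -> 131
```

The shell side then only asserts `(( count > 0 ))`, i.e. the test passes as long as at least one injected digest error was surfaced as a transient transport error.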
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 304937'
00:35:11.979 killing process with pid 304937
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 304937
00:35:11.979 Received shutdown signal, test time was about 2.000000 seconds
00:35:11.979
00:35:11.979 Latency(us)
00:35:11.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:11.979 ===================================================================================================================
00:35:11.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:11.979 06:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 304937
00:35:12.916 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=305503
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 305503 /var/tmp/bperf.sock
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 305503 ']'
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:12.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:12.916 06:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:12.916 [2024-07-26 06:27:24.172445] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:35:12.916 [2024-07-26 06:27:24.172607] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305503 ]
00:35:12.916 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:12.916 Zero copy mechanism will not be used.
00:35:12.916 EAL: No free 2048 kB hugepages reported on node 1
00:35:13.174 [2024-07-26 06:27:24.304503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:13.433 [2024-07-26 06:27:24.561696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:13.999 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:13.999 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:13.999 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:14.000 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:14.258 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:14.258 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:14.258 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:14.258 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:14.258 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:14.258 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:14.823 nvme0n1
00:35:14.824 06:27:25
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:14.824 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.824 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.824 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.824 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:14.824 06:27:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:14.824 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:14.824 Zero copy mechanism will not be used. 00:35:14.824 Running I/O for 2 seconds... 
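The `accel_error_inject_error -o crc32c -t corrupt` RPC above makes the host's CRC32C engine produce wrong digests, so every write that follows fails the receiver-side check logged below as `data_crc32_calc_done: *ERROR*: Data digest error`. As a rough illustration of what that check does (not SPDK's actual code, which lives in `tcp.c` and uses hardware-accelerated CRC32C), the sketch below implements the CRC-32C (Castagnoli) checksum bitwise and shows that a single corrupted payload byte changes the digest, which is how the mismatch is detected:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Model the data-digest check: sender computes a digest over the PDU
# payload; the receiver recomputes it and compares.
payload = bytes(range(64))
sent_digest = crc32c(payload)

# Uncorrupted transfer: digests agree, I/O completes normally.
assert crc32c(payload) == sent_digest

# Injected corruption (one flipped byte, as the error-injection RPC
# might cause): the recomputed digest differs, so the receiver reports
# a data digest error and the command fails as a transport error.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != sent_digest
```

With digest checking enabled via `--ddgst` on `bdev_nvme_attach_controller`, each such mismatch surfaces as the `TRANSIENT TRANSPORT ERROR (00/22)` completions seen throughout the run below.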
00:35:14.824 [2024-07-26 06:27:26.022745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.023239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.023292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.033456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.033948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.033994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.045825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.046289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.046330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.056414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.056791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.056829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.068615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.069048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.069111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.078493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.078923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.078975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.089123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.089623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.089659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.101038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.101454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 
06:27:26.101506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.111414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.111851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.111909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.120833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.121344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.121402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.129896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.130298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.130336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.138168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.138582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.138621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.146466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.146859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.146900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.824 [2024-07-26 06:27:26.155448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:14.824 [2024-07-26 06:27:26.155862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.824 [2024-07-26 06:27:26.155912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.163855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.164295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.164333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.173573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.173959] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.173997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.183204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.183642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.183699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.192879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.193297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.193334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.202715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.203133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.203173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.212390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.212768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.212820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.222450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.222886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.222936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.232433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.232789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.232826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.242035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.242482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.242526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 
06:27:26.251699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.252122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.252162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.262030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.083 [2024-07-26 06:27:26.262517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.083 [2024-07-26 06:27:26.262562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.083 [2024-07-26 06:27:26.274092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.274582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.274627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.283608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.284012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.284072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.293845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.294240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.294279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.304295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.304725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.304763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.316116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.316499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.316536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.326495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.326936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.326989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.336432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.336826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.336863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.345801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.346201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.346240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.356156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.356593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.356644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.366221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.366625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.366661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.375712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.376119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.376157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.385202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.385587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.385644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.396033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.396538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.396582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.084 [2024-07-26 06:27:26.406576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.084 [2024-07-26 06:27:26.407040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.084 [2024-07-26 06:27:26.407120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.417930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.418360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.418405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.427527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.427991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.428043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.436374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.436759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.436824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.445247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.445725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.445776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.454493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.454895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.454932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.463140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.463588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.463641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.472764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.473167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.473206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.481647] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.482111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.482149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.491129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.491520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.491557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.501833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.502251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.344 [2024-07-26 06:27:26.502289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.344 [2024-07-26 06:27:26.513356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.344 [2024-07-26 06:27:26.513745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.513783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.523552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.523987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.524046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.533798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.534254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.534309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.545005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.545456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.545493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.556036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.556515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.556559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.566457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.566923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.566960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.575807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.576294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.576347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.585204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.585628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.585665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.593798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.594222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.594266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.602440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.602900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.602944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.611605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.612037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.612097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.620709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.620867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.620911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.630263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.630654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.630710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.639275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.639677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.639728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.648994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.649415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.649465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.658877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.659356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.659394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.668107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.668529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.668566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.345 [2024-07-26 06:27:26.676670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.345 [2024-07-26 06:27:26.677076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.345 [2024-07-26 06:27:26.677131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.685412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.685848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.685899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.693951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.694451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.694489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.702179] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.702706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.702758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.711340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.711741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.711794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.719975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.720460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.720497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.729284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.729754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.729807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.738744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.739217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.739268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.747374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.747765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.747802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.756095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.606 [2024-07-26 06:27:26.756520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.606 [2024-07-26 06:27:26.756572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.606 [2024-07-26 06:27:26.764624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.765029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.765088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.772947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.773091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.773130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.782184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.782746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.782790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.791470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.791938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.791990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.800401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.800831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.800883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.809885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.810409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.810464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.818294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.818447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.818485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.827642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.828049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.828109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.836452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.836913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.836949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.845877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.846314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.846352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.854713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.855183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.855235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.863756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.864261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.864313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.873119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.873544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.873579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.882357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.882805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.882840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.891290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.891684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.891722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.900168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.900629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.900665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.909271] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.909722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.909760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.918626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.919072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.919111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.927531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.928002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.928055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.607 [2024-07-26 06:27:26.936616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.607 [2024-07-26 06:27:26.937055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.607 [2024-07-26 06:27:26.937117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:26.945515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:26.945990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:26.946049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:26.956473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:26.956933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:26.956987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:26.967544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:26.967950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:26.968003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:26.979121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:26.979564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:26.979609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:26.989928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:26.990331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:26.990385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.000720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.001226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.001275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.012317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.012766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.012818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.021648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.022086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.022148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.030252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.030705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.030743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.039049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.039487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.039545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.047490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.047925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.047980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.056176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.056626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.056678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.064501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.064909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.064961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.072510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.072903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.072955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.080588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.080960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.081015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.088950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.089374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.089413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.097025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.097418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.097472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.105348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.105806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.105842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.114738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.115221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.115274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.124078] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.124476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.124528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.868 [2024-07-26 06:27:27.133604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.868 [2024-07-26 06:27:27.133967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.868 [2024-07-26 06:27:27.134021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.869 [2024-07-26 06:27:27.143004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.869 [2024-07-26 06:27:27.143468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-07-26 06:27:27.143521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.869 [2024-07-26 06:27:27.152621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.869 [2024-07-26 06:27:27.153119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-07-26 06:27:27.153182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.869 [2024-07-26 06:27:27.161330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.869 [2024-07-26 06:27:27.161732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-07-26 06:27:27.161788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-07-26 06:27:27.170493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.869 [2024-07-26 06:27:27.170931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-07-26 06:27:27.170967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.869 [2024-07-26 06:27:27.179210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.869 [2024-07-26 06:27:27.179638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-07-26 06:27:27.179674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.869 [2024-07-26 06:27:27.187892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.869 [2024-07-26 06:27:27.188283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-07-26 06:27:27.188336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.869 [2024-07-26 06:27:27.197155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:15.869 [2024-07-26 06:27:27.197530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-07-26 06:27:27.197582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.206342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.206827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.206864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.215945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.216431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.216469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.224774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.225194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.225247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.233981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.234413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.234449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.242817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.243267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.243312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.251605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.252149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.252203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.261037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.261468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.261506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.269376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.269742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.269779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.277274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.277648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.277702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.286263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.286775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.286824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.295502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.295915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.295966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.304941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.305464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.305524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.314418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.314910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.314948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.322707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.323086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.323138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.331792] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.332293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.332332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.340602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.341042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.341101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.350032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.350444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.130 [2024-07-26 06:27:27.350496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.130 [2024-07-26 06:27:27.359411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.130 [2024-07-26 06:27:27.359849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.359884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.368870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.369404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.369441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.378484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.378968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.379021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.388360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.388735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.388789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.397928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.398375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.398429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.407478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.407954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.407994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.416508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.416857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.416901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.424890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.425251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.425292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.433079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.433431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.433471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.441099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.441459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.441498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.449206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.449565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.449603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.131 [2024-07-26 06:27:27.457965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.131 [2024-07-26 06:27:27.458316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.131 [2024-07-26 06:27:27.458372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.466218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.466644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.466697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.474367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.474762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.474821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.482754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.483144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.483199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.490982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.491331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.491385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.498636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.498996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.499035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.506284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.506633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.506672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.514656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.515013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.515077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.522594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.522952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.522991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.530856] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.531226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.531282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.539049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.539411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.539466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.547280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.547648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.547703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.555581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.555971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.556012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.563332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.563699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.563759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.571745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.572134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.572190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.579915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.580276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.580340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.588614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.589129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.589170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.598072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.598552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.598607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.607790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.608241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.608295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.617366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.617762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.617816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.627449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.627907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.627948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.637302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.637665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.637704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.645371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.645730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.645769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.653333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.392 [2024-07-26 06:27:27.653693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.392 [2024-07-26 06:27:27.653734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.392 [2024-07-26 06:27:27.662251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.662749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.662802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.393 [2024-07-26 06:27:27.671181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.671547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.671587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.393 [2024-07-26 06:27:27.679472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.679928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.679982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.393 [2024-07-26 06:27:27.690117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.690625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.690680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.393 [2024-07-26 06:27:27.699511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.699876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.699916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.393 [2024-07-26 06:27:27.707590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.707950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.707993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.393 [2024-07-26 06:27:27.715721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.716113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.716168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.393 [2024-07-26 06:27:27.724439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.393 [2024-07-26 06:27:27.724806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.393 [2024-07-26 06:27:27.724845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.732495] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.732856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.732896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.740679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.741077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.741118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.749590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.749974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.750029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.758959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.759457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.759496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.768677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.769103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.769158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.778544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.779049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.779111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.788585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.789083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.789125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.798420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.798841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.798880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.808040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.808477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.808518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.817426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.817859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.817918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.827282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.827755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.827794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.837150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.837514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.837583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.846097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.846607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.846660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.855478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.855935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.855987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.865123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.865600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.865653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.874989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.875456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.875511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.884419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.884863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.654 [2024-07-26 06:27:27.884901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.654 [2024-07-26 06:27:27.894225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.654 [2024-07-26 06:27:27.894628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.894680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.903635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.904138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.904178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.912842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.913210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.913251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.921249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.921657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.921698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.930637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.931145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.931185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.940291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.940683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.940737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.948690] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.949056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.949103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.957987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.958427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.958467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.967277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.967694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.967732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.976045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.976401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.976442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.655 [2024-07-26 06:27:27.985208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.655 [2024-07-26 06:27:27.985608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.655 [2024-07-26 06:27:27.985664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.913 [2024-07-26 06:27:27.994731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.913 [2024-07-26 06:27:27.995235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.913 [2024-07-26 06:27:27.995300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.913 [2024-07-26 06:27:28.004594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:35:16.913 [2024-07-26 06:27:28.005108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.913 [2024-07-26 06:27:28.005148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.913 00:35:16.913 Latency(us) 00:35:16.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.913 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:16.913 nvme0n1 : 2.00 3325.29 415.66 0.00 0.00 4798.96 3446.71 14563.56 00:35:16.913 
=================================================================================================================== 00:35:16.913 Total : 3325.29 415.66 0.00 0.00 4798.96 3446.71 14563.56 00:35:16.913 0 00:35:16.913 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:16.913 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:16.913 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:16.913 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:16.913 | .driver_specific 00:35:16.913 | .nvme_error 00:35:16.913 | .status_code 00:35:16.913 | .command_transient_transport_error' 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 305503 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 305503 ']' 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 305503 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 305503 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 305503' 00:35:17.173 killing process with pid 305503 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 305503 00:35:17.173 Received shutdown signal, test time was about 2.000000 seconds 00:35:17.173 00:35:17.173 Latency(us) 00:35:17.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.173 =================================================================================================================== 00:35:17.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:17.173 06:27:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 305503 00:35:18.110 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 302949 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 302949 ']' 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 302949 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 302949 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 302949' 00:35:18.110 killing process with pid 302949 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 302949 00:35:18.110 06:27:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 302949 00:35:19.488 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:35:19.488 00:35:19.488 real 0m23.592s 00:35:19.488 user 0m45.319s 00:35:19.488 sys 0m4.809s 00:35:19.488 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.488 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.488 ************************************ 00:35:19.488 END TEST nvmf_digest_error 00:35:19.488 ************************************ 00:35:19.488 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:19.488 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:19.489 rmmod nvme_tcp 00:35:19.489 rmmod nvme_fabrics 00:35:19.489 rmmod nvme_keyring 00:35:19.489 06:27:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 302949 ']' 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 302949 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 302949 ']' 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 302949 00:35:19.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (302949) - No such process 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 302949 is not found' 00:35:19.489 Process with pid 302949 is not found 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.489 06:27:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.391 06:27:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:21.391 00:35:21.391 real 
0m52.632s 00:35:21.391 user 1m34.434s 00:35:21.391 sys 0m10.698s 00:35:21.391 06:27:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:21.391 06:27:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:21.391 ************************************ 00:35:21.391 END TEST nvmf_digest 00:35:21.391 ************************************ 00:35:21.391 06:27:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:21.391 06:27:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:21.391 06:27:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:21.392 06:27:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:21.392 06:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:21.392 06:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:21.392 06:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.650 ************************************ 00:35:21.650 START TEST nvmf_bdevperf 00:35:21.650 ************************************ 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:21.650 * Looking for test storage... 
00:35:21.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:35:21.650 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:21.651 06:27:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.552 06:27:34 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:23.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:23.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:23.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:23.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:23.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:23.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:35:23.552 00:35:23.552 --- 10.0.0.2 ping statistics --- 00:35:23.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.552 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:23.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:35:23.552 00:35:23.552 --- 10.0.0.1 ping statistics --- 00:35:23.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.552 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.552 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:23.553 
06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=308224 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 308224 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 308224 ']' 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:23.553 06:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.811 [2024-07-26 06:27:34.964895] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:23.812 [2024-07-26 06:27:34.965051] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.812 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.812 [2024-07-26 06:27:35.115999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:24.070 [2024-07-26 06:27:35.373299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.070 [2024-07-26 06:27:35.373379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.070 [2024-07-26 06:27:35.373421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.070 [2024-07-26 06:27:35.373457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.070 [2024-07-26 06:27:35.373476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:24.070 [2024-07-26 06:27:35.373790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:24.070 [2024-07-26 06:27:35.373822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.070 [2024-07-26 06:27:35.373831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.637 [2024-07-26 06:27:35.889622] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.637 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.896 Malloc0 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.896 06:27:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:24.896 [2024-07-26 06:27:35.999489] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:24.896 
06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:24.896 { 00:35:24.896 "params": { 00:35:24.896 "name": "Nvme$subsystem", 00:35:24.896 "trtype": "$TEST_TRANSPORT", 00:35:24.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.896 "adrfam": "ipv4", 00:35:24.896 "trsvcid": "$NVMF_PORT", 00:35:24.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.896 "hdgst": ${hdgst:-false}, 00:35:24.896 "ddgst": ${ddgst:-false} 00:35:24.896 }, 00:35:24.896 "method": "bdev_nvme_attach_controller" 00:35:24.896 } 00:35:24.896 EOF 00:35:24.896 )") 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:24.896 06:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:24.896 "params": { 00:35:24.896 "name": "Nvme1", 00:35:24.896 "trtype": "tcp", 00:35:24.896 "traddr": "10.0.0.2", 00:35:24.896 "adrfam": "ipv4", 00:35:24.896 "trsvcid": "4420", 00:35:24.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:24.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:24.896 "hdgst": false, 00:35:24.896 "ddgst": false 00:35:24.896 }, 00:35:24.896 "method": "bdev_nvme_attach_controller" 00:35:24.896 }' 00:35:24.896 [2024-07-26 06:27:36.083342] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:24.896 [2024-07-26 06:27:36.083491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308378 ] 00:35:24.896 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.896 [2024-07-26 06:27:36.207455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.155 [2024-07-26 06:27:36.446398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.720 Running I/O for 1 seconds... 00:35:26.664 00:35:26.664 Latency(us) 00:35:26.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.664 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:26.664 Verification LBA range: start 0x0 length 0x4000 00:35:26.664 Nvme1n1 : 1.01 6281.99 24.54 0.00 0.00 20287.91 1674.81 16408.27 00:35:26.664 =================================================================================================================== 00:35:26.664 Total : 6281.99 24.54 0.00 0.00 20287.91 1674.81 16408.27 00:35:27.638 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=308655 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local 
subsystem config 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:27.638 { 00:35:27.638 "params": { 00:35:27.638 "name": "Nvme$subsystem", 00:35:27.638 "trtype": "$TEST_TRANSPORT", 00:35:27.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.638 "adrfam": "ipv4", 00:35:27.638 "trsvcid": "$NVMF_PORT", 00:35:27.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.638 "hdgst": ${hdgst:-false}, 00:35:27.638 "ddgst": ${ddgst:-false} 00:35:27.638 }, 00:35:27.638 "method": "bdev_nvme_attach_controller" 00:35:27.638 } 00:35:27.638 EOF 00:35:27.638 )") 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:27.638 06:27:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:27.638 "params": { 00:35:27.638 "name": "Nvme1", 00:35:27.638 "trtype": "tcp", 00:35:27.638 "traddr": "10.0.0.2", 00:35:27.638 "adrfam": "ipv4", 00:35:27.638 "trsvcid": "4420", 00:35:27.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.639 "hdgst": false, 00:35:27.639 "ddgst": false 00:35:27.639 }, 00:35:27.639 "method": "bdev_nvme_attach_controller" 00:35:27.639 }' 00:35:27.639 [2024-07-26 06:27:38.943893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:27.639 [2024-07-26 06:27:38.944091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308655 ] 00:35:27.897 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.897 [2024-07-26 06:27:39.077430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.155 [2024-07-26 06:27:39.313950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.721 Running I/O for 15 seconds... 00:35:30.626 06:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 308224 00:35:30.626 06:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:30.626 [2024-07-26 06:27:41.886168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97656 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886761] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.886953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.886980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.887004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.887030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.887054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.887091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.887130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.887155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.887176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.887200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.887226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.887250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.887271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.887295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.626 [2024-07-26 06:27:41.887316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.626 [2024-07-26 06:27:41.887340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:30.627 [2024-07-26 06:27:41.887376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.887955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.887978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 
06:27:41.888235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:73 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:30.627 [2024-07-26 06:27:41.888816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.888967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.888990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.889018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.889076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.889117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.889142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.889167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.889191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.889211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.889234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.889254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.889277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.889297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.627 [2024-07-26 06:27:41.889320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.627 [2024-07-26 06:27:41.889356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 
06:27:41.889690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.889923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.889950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.628 [2024-07-26 06:27:41.889973] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.628 [2024-07-26 06:27:41.890023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.890954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.890980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 
[2024-07-26 06:27:41.891178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.628 [2024-07-26 06:27:41.891391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.628 [2024-07-26 06:27:41.891427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891455] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.629 [2024-07-26 06:27:41.891479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.629 [2024-07-26 06:27:41.891528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.629 [2024-07-26 06:27:41.891577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.629 [2024-07-26 06:27:41.891627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.891676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.891725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.891781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.891831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.891882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.891931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.891958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.891981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892030] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:66 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.629 [2024-07-26 06:27:41.892455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:30.629 [2024-07-26 06:27:41.892628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.629 [2024-07-26 06:27:41.892801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.892825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:35:30.629 [2024-07-26 06:27:41.892855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:30.629 [2024-07-26 06:27:41.892876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:30.629 [2024-07-26 06:27:41.892897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97624 len:8 PRP1 0x0 PRP2 0x0 
00:35:30.629 [2024-07-26 06:27:41.892919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.629 [2024-07-26 06:27:41.893240] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 00:35:30.629 [2024-07-26 06:27:41.897509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.629 [2024-07-26 06:27:41.897637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.629 [2024-07-26 06:27:41.898493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.629 [2024-07-26 06:27:41.898539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.629 [2024-07-26 06:27:41.898572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.629 [2024-07-26 06:27:41.898873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.629 [2024-07-26 06:27:41.899192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.629 [2024-07-26 06:27:41.899244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.629 [2024-07-26 06:27:41.899269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.629 [2024-07-26 06:27:41.903479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.629 [2024-07-26 06:27:41.912541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.629 [2024-07-26 06:27:41.913037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.629 [2024-07-26 06:27:41.913087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.629 [2024-07-26 06:27:41.913114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.629 [2024-07-26 06:27:41.913407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.629 [2024-07-26 06:27:41.913699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.629 [2024-07-26 06:27:41.913730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.629 [2024-07-26 06:27:41.913752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.629 [2024-07-26 06:27:41.917938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.629 [2024-07-26 06:27:41.927021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.629 [2024-07-26 06:27:41.927544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.629 [2024-07-26 06:27:41.927585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.630 [2024-07-26 06:27:41.927611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.630 [2024-07-26 06:27:41.927901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.630 [2024-07-26 06:27:41.928204] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.630 [2024-07-26 06:27:41.928236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.630 [2024-07-26 06:27:41.928258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.630 [2024-07-26 06:27:41.932427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.630 [2024-07-26 06:27:41.941586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.630 [2024-07-26 06:27:41.942067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.630 [2024-07-26 06:27:41.942109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.630 [2024-07-26 06:27:41.942135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.630 [2024-07-26 06:27:41.942429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.630 [2024-07-26 06:27:41.942734] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.630 [2024-07-26 06:27:41.942765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.630 [2024-07-26 06:27:41.942788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.630 [2024-07-26 06:27:41.947007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.630 [2024-07-26 06:27:41.956188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.630 [2024-07-26 06:27:41.956693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.630 [2024-07-26 06:27:41.956734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.630 [2024-07-26 06:27:41.956759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.630 [2024-07-26 06:27:41.957070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.630 [2024-07-26 06:27:41.957370] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.630 [2024-07-26 06:27:41.957401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.630 [2024-07-26 06:27:41.957422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.889 [2024-07-26 06:27:41.961589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.889 [2024-07-26 06:27:41.970709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.889 [2024-07-26 06:27:41.971220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.889 [2024-07-26 06:27:41.971262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.889 [2024-07-26 06:27:41.971287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.889 [2024-07-26 06:27:41.971576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.889 [2024-07-26 06:27:41.971867] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.889 [2024-07-26 06:27:41.971898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.889 [2024-07-26 06:27:41.971919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.889 [2024-07-26 06:27:41.976111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.889 [2024-07-26 06:27:41.985218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.889 [2024-07-26 06:27:41.985715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.889 [2024-07-26 06:27:41.985755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.889 [2024-07-26 06:27:41.985780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.889 [2024-07-26 06:27:41.986086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.889 [2024-07-26 06:27:41.986388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.889 [2024-07-26 06:27:41.986420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.889 [2024-07-26 06:27:41.986441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.889 [2024-07-26 06:27:41.990614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.889 [2024-07-26 06:27:41.999706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.889 [2024-07-26 06:27:42.000223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.889 [2024-07-26 06:27:42.000264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.889 [2024-07-26 06:27:42.000290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.889 [2024-07-26 06:27:42.000580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.889 [2024-07-26 06:27:42.000872] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.889 [2024-07-26 06:27:42.000903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.889 [2024-07-26 06:27:42.000925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.889 [2024-07-26 06:27:42.005138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.889 [2024-07-26 06:27:42.014219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.889 [2024-07-26 06:27:42.014687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.889 [2024-07-26 06:27:42.014729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.889 [2024-07-26 06:27:42.014755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.889 [2024-07-26 06:27:42.015046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.889 [2024-07-26 06:27:42.015350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.889 [2024-07-26 06:27:42.015386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.889 [2024-07-26 06:27:42.015408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.889 [2024-07-26 06:27:42.019593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.028934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.029452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.029494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.029520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.029811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.030115] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.030147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.030169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.034351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.043627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.044128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.044173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.044199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.044487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.044777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.044809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.044830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.049006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.058321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.058818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.058870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.058893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.059223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.059524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.059555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.059577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.063750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.072835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.073337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.073383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.073416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.073702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.073991] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.074021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.074051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.078231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.087503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.088009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.088045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.088096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.088421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.088717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.088748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.088769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.092916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.102166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.102653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.102694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.102719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.103005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.103306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.103337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.103359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.107513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.116745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.117248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.117287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.117312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.117599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.117888] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.117919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.117941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.122087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.131324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.131781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.131820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.131860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.132169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.132460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.132491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.132512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.136669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.145917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.146425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.146461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.146484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.146766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.147054] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.147096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.147119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.151250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.890 [2024-07-26 06:27:42.160487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.890 [2024-07-26 06:27:42.160960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.890 [2024-07-26 06:27:42.161001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.890 [2024-07-26 06:27:42.161026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.890 [2024-07-26 06:27:42.161320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.890 [2024-07-26 06:27:42.161609] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.890 [2024-07-26 06:27:42.161640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.890 [2024-07-26 06:27:42.161661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.890 [2024-07-26 06:27:42.165785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.891 [2024-07-26 06:27:42.174966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.891 [2024-07-26 06:27:42.175489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.891 [2024-07-26 06:27:42.175529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.891 [2024-07-26 06:27:42.175555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.891 [2024-07-26 06:27:42.175852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.891 [2024-07-26 06:27:42.176138] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.891 [2024-07-26 06:27:42.176166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.891 [2024-07-26 06:27:42.176184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.891 [2024-07-26 06:27:42.180311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.891 [2024-07-26 06:27:42.189605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.891 [2024-07-26 06:27:42.190095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.891 [2024-07-26 06:27:42.190138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.891 [2024-07-26 06:27:42.190162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.891 [2024-07-26 06:27:42.190450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.891 [2024-07-26 06:27:42.190740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.891 [2024-07-26 06:27:42.190770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.891 [2024-07-26 06:27:42.190792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.891 [2024-07-26 06:27:42.194950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.891 [2024-07-26 06:27:42.204163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.891 [2024-07-26 06:27:42.204749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.891 [2024-07-26 06:27:42.204807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.891 [2024-07-26 06:27:42.204833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.891 [2024-07-26 06:27:42.205146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.891 [2024-07-26 06:27:42.205425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.891 [2024-07-26 06:27:42.205456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.891 [2024-07-26 06:27:42.205477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:30.891 [2024-07-26 06:27:42.209598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:30.891 [2024-07-26 06:27:42.218814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:30.891 [2024-07-26 06:27:42.219322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.891 [2024-07-26 06:27:42.219364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:30.891 [2024-07-26 06:27:42.219390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:30.891 [2024-07-26 06:27:42.219678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:30.891 [2024-07-26 06:27:42.219968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:30.891 [2024-07-26 06:27:42.219999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:30.891 [2024-07-26 06:27:42.220020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.150 [2024-07-26 06:27:42.224173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.150 [2024-07-26 06:27:42.233403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.150 [2024-07-26 06:27:42.233914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.150 [2024-07-26 06:27:42.233964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.150 [2024-07-26 06:27:42.233988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.150 [2024-07-26 06:27:42.234312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.150 [2024-07-26 06:27:42.234606] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.150 [2024-07-26 06:27:42.234638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.150 [2024-07-26 06:27:42.234659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.150 [2024-07-26 06:27:42.238795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.150 [2024-07-26 06:27:42.247998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.150 [2024-07-26 06:27:42.248459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.150 [2024-07-26 06:27:42.248500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.150 [2024-07-26 06:27:42.248525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.150 [2024-07-26 06:27:42.248811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.150 [2024-07-26 06:27:42.249114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.150 [2024-07-26 06:27:42.249145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.150 [2024-07-26 06:27:42.249167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.150 [2024-07-26 06:27:42.253307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.150 [2024-07-26 06:27:42.262556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.150 [2024-07-26 06:27:42.263049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.150 [2024-07-26 06:27:42.263105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.150 [2024-07-26 06:27:42.263129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.150 [2024-07-26 06:27:42.263431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.150 [2024-07-26 06:27:42.263721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.150 [2024-07-26 06:27:42.263751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.150 [2024-07-26 06:27:42.263773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.150 [2024-07-26 06:27:42.267926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.150 [2024-07-26 06:27:42.277164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.150 [2024-07-26 06:27:42.277632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.150 [2024-07-26 06:27:42.277672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.150 [2024-07-26 06:27:42.277697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.150 [2024-07-26 06:27:42.277984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.150 [2024-07-26 06:27:42.278290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.150 [2024-07-26 06:27:42.278321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.150 [2024-07-26 06:27:42.278349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.150 [2024-07-26 06:27:42.282483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.150 [2024-07-26 06:27:42.291731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.150 [2024-07-26 06:27:42.292236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.150 [2024-07-26 06:27:42.292278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.150 [2024-07-26 06:27:42.292303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.150 [2024-07-26 06:27:42.292589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.150 [2024-07-26 06:27:42.292879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.150 [2024-07-26 06:27:42.292909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.150 [2024-07-26 06:27:42.292930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.150 [2024-07-26 06:27:42.297067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.150 [2024-07-26 06:27:42.305977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.150 [2024-07-26 06:27:42.306462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.150 [2024-07-26 06:27:42.306502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.150 [2024-07-26 06:27:42.306527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.150 [2024-07-26 06:27:42.306813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.150 [2024-07-26 06:27:42.307129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.150 [2024-07-26 06:27:42.307158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.150 [2024-07-26 06:27:42.307177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.150 [2024-07-26 06:27:42.311312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.150 [2024-07-26 06:27:42.320632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.321153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.321189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.321212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.321502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.321792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.321823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.321844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.326001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.335270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.335906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.335970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.335996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.336303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.336630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.336662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.336683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.340786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.349968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.350546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.350583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.350605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.350897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.351205] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.351234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.351253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.355493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.364634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.365142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.365178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.365201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.365506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.365795] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.365835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.365856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.370071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.379138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.379677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.379717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.379742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.380029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.380348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.380380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.380402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.384562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.393613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.394129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.394170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.394196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.394486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.394779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.394810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.394832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.399018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.408284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.408762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.408797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.408834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.409140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.409430] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.409461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.409483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.413637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.422870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.423334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.423375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.423400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.423687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.423977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.424008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.424035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.428205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.437466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.437974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.438023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.438047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.438361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.438649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.438681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.438702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.442841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.452126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.151 [2024-07-26 06:27:42.452590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.151 [2024-07-26 06:27:42.452631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.151 [2024-07-26 06:27:42.452655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.151 [2024-07-26 06:27:42.452942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.151 [2024-07-26 06:27:42.453244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.151 [2024-07-26 06:27:42.453276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.151 [2024-07-26 06:27:42.453297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.151 [2024-07-26 06:27:42.457452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.151 [2024-07-26 06:27:42.466692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.152 [2024-07-26 06:27:42.467159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.152 [2024-07-26 06:27:42.467200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.152 [2024-07-26 06:27:42.467226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.152 [2024-07-26 06:27:42.467514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.152 [2024-07-26 06:27:42.467804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.152 [2024-07-26 06:27:42.467835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.152 [2024-07-26 06:27:42.467856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.152 [2024-07-26 06:27:42.472003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.152 [2024-07-26 06:27:42.481272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.152 [2024-07-26 06:27:42.481745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.152 [2024-07-26 06:27:42.481797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.152 [2024-07-26 06:27:42.481820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.152 [2024-07-26 06:27:42.482135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.152 [2024-07-26 06:27:42.482425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.152 [2024-07-26 06:27:42.482456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.152 [2024-07-26 06:27:42.482477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.411 [2024-07-26 06:27:42.486636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.411 [2024-07-26 06:27:42.495859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.411 [2024-07-26 06:27:42.496332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.411 [2024-07-26 06:27:42.496383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.411 [2024-07-26 06:27:42.496419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.411 [2024-07-26 06:27:42.496721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.411 [2024-07-26 06:27:42.497009] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.411 [2024-07-26 06:27:42.497040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.411 [2024-07-26 06:27:42.497073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.411 [2024-07-26 06:27:42.501227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.411 [2024-07-26 06:27:42.510466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.411 [2024-07-26 06:27:42.510957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.411 [2024-07-26 06:27:42.510997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.411 [2024-07-26 06:27:42.511023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.411 [2024-07-26 06:27:42.511318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.411 [2024-07-26 06:27:42.511608] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.411 [2024-07-26 06:27:42.511638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.411 [2024-07-26 06:27:42.511660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.411 [2024-07-26 06:27:42.515803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.411 [2024-07-26 06:27:42.525046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.411 [2024-07-26 06:27:42.525552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.411 [2024-07-26 06:27:42.525602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.411 [2024-07-26 06:27:42.525626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.411 [2024-07-26 06:27:42.525936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.411 [2024-07-26 06:27:42.526238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.411 [2024-07-26 06:27:42.526270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.411 [2024-07-26 06:27:42.526300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.411 [2024-07-26 06:27:42.530449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.411 [2024-07-26 06:27:42.539684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.411 [2024-07-26 06:27:42.540151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.411 [2024-07-26 06:27:42.540192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.411 [2024-07-26 06:27:42.540217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.411 [2024-07-26 06:27:42.540504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.411 [2024-07-26 06:27:42.540792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.411 [2024-07-26 06:27:42.540823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.411 [2024-07-26 06:27:42.540860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.411 [2024-07-26 06:27:42.545003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.411 [2024-07-26 06:27:42.554219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.411 [2024-07-26 06:27:42.554702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.411 [2024-07-26 06:27:42.554752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.411 [2024-07-26 06:27:42.554775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.411 [2024-07-26 06:27:42.555089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.411 [2024-07-26 06:27:42.555382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.411 [2024-07-26 06:27:42.555414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.411 [2024-07-26 06:27:42.555435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.411 [2024-07-26 06:27:42.559583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.411 [2024-07-26 06:27:42.568816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.411 [2024-07-26 06:27:42.569291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.411 [2024-07-26 06:27:42.569340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.411 [2024-07-26 06:27:42.569376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.411 [2024-07-26 06:27:42.569674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.411 [2024-07-26 06:27:42.569963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.411 [2024-07-26 06:27:42.569994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.411 [2024-07-26 06:27:42.570023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.411 [2024-07-26 06:27:42.574175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.411 [2024-07-26 06:27:42.583425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:31.411 [2024-07-26 06:27:42.583922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.412 [2024-07-26 06:27:42.583962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:31.412 [2024-07-26 06:27:42.583987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:31.412 [2024-07-26 06:27:42.584285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:31.412 [2024-07-26 06:27:42.584575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:31.412 [2024-07-26 06:27:42.584606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:31.412 [2024-07-26 06:27:42.584627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:31.412 [2024-07-26 06:27:42.588795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:31.412 [2024-07-26 06:27:42.598030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.598518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.598559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.598584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.598870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.599172] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.599204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.599225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.603359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.612585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.613043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.613094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.613121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.613408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.613699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.613730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.613751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.617895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.627112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.627625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.627662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.627685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.627986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.628288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.628320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.628341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.632480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.641739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.642234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.642274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.642299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.642586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.642875] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.642906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.642927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.647079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.656346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.656826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.656866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.656891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.657191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.657481] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.657511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.657532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.661676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.670948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.671427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.671467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.671492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.671785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.672090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.672121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.672142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.676284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.685514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.686005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.686056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.686091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.686411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.686703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.686735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.686756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.690890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.700132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.700613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.700653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.700679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.700967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.701270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.701302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.701324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.705465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.714719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.412 [2024-07-26 06:27:42.715202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.412 [2024-07-26 06:27:42.715252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.412 [2024-07-26 06:27:42.715275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.412 [2024-07-26 06:27:42.715569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.412 [2024-07-26 06:27:42.715860] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.412 [2024-07-26 06:27:42.715891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.412 [2024-07-26 06:27:42.715919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.412 [2024-07-26 06:27:42.720080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.412 [2024-07-26 06:27:42.729319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.413 [2024-07-26 06:27:42.729801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.413 [2024-07-26 06:27:42.729841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.413 [2024-07-26 06:27:42.729866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.413 [2024-07-26 06:27:42.730171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.413 [2024-07-26 06:27:42.730460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.413 [2024-07-26 06:27:42.730491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.413 [2024-07-26 06:27:42.730512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.413 [2024-07-26 06:27:42.734659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.413 [2024-07-26 06:27:42.743872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.672 [2024-07-26 06:27:42.744349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.672 [2024-07-26 06:27:42.744389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.672 [2024-07-26 06:27:42.744415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.672 [2024-07-26 06:27:42.744701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.672 [2024-07-26 06:27:42.744990] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.672 [2024-07-26 06:27:42.745021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.672 [2024-07-26 06:27:42.745042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.672 [2024-07-26 06:27:42.749199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.672 [2024-07-26 06:27:42.758441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.672 [2024-07-26 06:27:42.758895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.672 [2024-07-26 06:27:42.758934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.672 [2024-07-26 06:27:42.758959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.672 [2024-07-26 06:27:42.759258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.672 [2024-07-26 06:27:42.759547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.672 [2024-07-26 06:27:42.759579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.672 [2024-07-26 06:27:42.759600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.672 [2024-07-26 06:27:42.763730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.672 [2024-07-26 06:27:42.772963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.672 [2024-07-26 06:27:42.773461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.672 [2024-07-26 06:27:42.773501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.672 [2024-07-26 06:27:42.773526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.672 [2024-07-26 06:27:42.773813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.672 [2024-07-26 06:27:42.774115] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.672 [2024-07-26 06:27:42.774147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.672 [2024-07-26 06:27:42.774168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.672 [2024-07-26 06:27:42.778328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.672 [2024-07-26 06:27:42.787602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.672 [2024-07-26 06:27:42.788075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.672 [2024-07-26 06:27:42.788116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.672 [2024-07-26 06:27:42.788141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.672 [2024-07-26 06:27:42.788428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.672 [2024-07-26 06:27:42.788717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.672 [2024-07-26 06:27:42.788748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.672 [2024-07-26 06:27:42.788769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.672 [2024-07-26 06:27:42.792929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.672 [2024-07-26 06:27:42.802191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.672 [2024-07-26 06:27:42.802677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.672 [2024-07-26 06:27:42.802717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.672 [2024-07-26 06:27:42.802742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.672 [2024-07-26 06:27:42.803028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.672 [2024-07-26 06:27:42.803329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.672 [2024-07-26 06:27:42.803360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.672 [2024-07-26 06:27:42.803382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.672 [2024-07-26 06:27:42.807503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.672 [2024-07-26 06:27:42.816706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.672 [2024-07-26 06:27:42.817225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.672 [2024-07-26 06:27:42.817268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.672 [2024-07-26 06:27:42.817294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.672 [2024-07-26 06:27:42.817589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.672 [2024-07-26 06:27:42.817879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.672 [2024-07-26 06:27:42.817910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.672 [2024-07-26 06:27:42.817931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.672 [2024-07-26 06:27:42.822086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.672 [2024-07-26 06:27:42.831322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.672 [2024-07-26 06:27:42.831813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.672 [2024-07-26 06:27:42.831862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.831886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.832203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.832494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.832525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.832547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.836693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.845929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.846398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.846439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.846464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.846750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.847040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.847083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.847106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.851260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.860506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.861010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.861051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.861089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.861379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.861670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.861700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.861727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.865869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.875101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.875604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.875654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.875677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.875981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.876283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.876315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.876337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.880469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.889706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.890207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.890248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.890274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.890559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.890847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.890878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.890899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.895024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.904478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.904949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.904990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.905015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.905312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.905601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.905632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.905654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.909795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.919040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.919543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.919583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.919608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.919895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.920200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.920231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.920252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.924391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.933606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.934109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.934160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.934184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.934491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.934779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.934810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.934832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.938969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.948188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.948669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.948709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.948734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.949019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.949319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.949351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.949372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.953505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.962729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.673 [2024-07-26 06:27:42.963214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.673 [2024-07-26 06:27:42.963290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.673 [2024-07-26 06:27:42.963317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.673 [2024-07-26 06:27:42.963613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.673 [2024-07-26 06:27:42.963904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.673 [2024-07-26 06:27:42.963935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.673 [2024-07-26 06:27:42.963957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.673 [2024-07-26 06:27:42.968113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.673 [2024-07-26 06:27:42.977340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.674 [2024-07-26 06:27:42.977832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.674 [2024-07-26 06:27:42.977873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.674 [2024-07-26 06:27:42.977898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.674 [2024-07-26 06:27:42.978203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.674 [2024-07-26 06:27:42.978494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.674 [2024-07-26 06:27:42.978525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.674 [2024-07-26 06:27:42.978546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.674 [2024-07-26 06:27:42.982691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.674 [2024-07-26 06:27:42.991949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.674 [2024-07-26 06:27:42.992423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.674 [2024-07-26 06:27:42.992464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.674 [2024-07-26 06:27:42.992489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.674 [2024-07-26 06:27:42.992775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.674 [2024-07-26 06:27:42.993076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.674 [2024-07-26 06:27:42.993107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.674 [2024-07-26 06:27:42.993128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.674 [2024-07-26 06:27:42.997269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.006496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.007121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-07-26 06:27:43.007163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.933 [2024-07-26 06:27:43.007188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.933 [2024-07-26 06:27:43.007475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.933 [2024-07-26 06:27:43.007765] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.933 [2024-07-26 06:27:43.007801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.933 [2024-07-26 06:27:43.007824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.933 [2024-07-26 06:27:43.011970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.020950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.021474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-07-26 06:27:43.021515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.933 [2024-07-26 06:27:43.021540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.933 [2024-07-26 06:27:43.021828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.933 [2024-07-26 06:27:43.022132] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.933 [2024-07-26 06:27:43.022164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.933 [2024-07-26 06:27:43.022186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.933 [2024-07-26 06:27:43.026317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.035544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.036072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-07-26 06:27:43.036112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.933 [2024-07-26 06:27:43.036137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.933 [2024-07-26 06:27:43.036425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.933 [2024-07-26 06:27:43.036713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.933 [2024-07-26 06:27:43.036744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.933 [2024-07-26 06:27:43.036765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.933 [2024-07-26 06:27:43.040913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.050154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.050717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-07-26 06:27:43.050775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.933 [2024-07-26 06:27:43.050799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.933 [2024-07-26 06:27:43.051129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.933 [2024-07-26 06:27:43.051419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.933 [2024-07-26 06:27:43.051450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.933 [2024-07-26 06:27:43.051471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.933 [2024-07-26 06:27:43.055625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.064617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.065080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-07-26 06:27:43.065133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.933 [2024-07-26 06:27:43.065158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.933 [2024-07-26 06:27:43.065445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.933 [2024-07-26 06:27:43.065735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.933 [2024-07-26 06:27:43.065766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.933 [2024-07-26 06:27:43.065788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.933 [2024-07-26 06:27:43.069935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.079161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.079648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-07-26 06:27:43.079689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.933 [2024-07-26 06:27:43.079714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.933 [2024-07-26 06:27:43.080000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.933 [2024-07-26 06:27:43.080297] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.933 [2024-07-26 06:27:43.080330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.933 [2024-07-26 06:27:43.080352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.933 [2024-07-26 06:27:43.084471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.093724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.094217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-07-26 06:27:43.094267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.933 [2024-07-26 06:27:43.094291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.933 [2024-07-26 06:27:43.094594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.933 [2024-07-26 06:27:43.094884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.933 [2024-07-26 06:27:43.094914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.933 [2024-07-26 06:27:43.094936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.933 [2024-07-26 06:27:43.099073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.933 [2024-07-26 06:27:43.108308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.933 [2024-07-26 06:27:43.108791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.108832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.108857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.109160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.109453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.109484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.109505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.113642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.122936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.123443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.123484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.123510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.123798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.124104] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.124136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.124158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.128308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.137546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.138012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.138053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.138093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.138380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.138668] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.138699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.138720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.142855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.152110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.152561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.152600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.152625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.152910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.153214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.153253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.153277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.157439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.166715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.167235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.167276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.167302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.167603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.167892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.167923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.167945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.172098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.181358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.181809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.181849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.181874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.182175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.182466] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.182499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.182520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.186613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.195827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.196336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.196378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.196403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.196690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.196920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.196945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.196962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.201089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.210341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.210894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.210934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.210959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.211284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.211535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.211560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.211577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.215648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.224891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.225402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.225438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.225460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.225729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.225960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.225985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.226002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.230137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.239386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.239866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-07-26 06:27:43.239907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.934 [2024-07-26 06:27:43.239933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.934 [2024-07-26 06:27:43.240238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.934 [2024-07-26 06:27:43.240488] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.934 [2024-07-26 06:27:43.240513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.934 [2024-07-26 06:27:43.240530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.934 [2024-07-26 06:27:43.244767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:31.934 [2024-07-26 06:27:43.253990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:31.934 [2024-07-26 06:27:43.254505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-07-26 06:27:43.254559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:31.935 [2024-07-26 06:27:43.254590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:31.935 [2024-07-26 06:27:43.254878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:31.935 [2024-07-26 06:27:43.255143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:31.935 [2024-07-26 06:27:43.255170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:31.935 [2024-07-26 06:27:43.255188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:31.935 [2024-07-26 06:27:43.259279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.197 [2024-07-26 06:27:43.268484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.197 [2024-07-26 06:27:43.269033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.197 [2024-07-26 06:27:43.269075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.197 [2024-07-26 06:27:43.269106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.197 [2024-07-26 06:27:43.269394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.198 [2024-07-26 06:27:43.269625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.198 [2024-07-26 06:27:43.269666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.198 [2024-07-26 06:27:43.269683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.198 [2024-07-26 06:27:43.273821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.198 [2024-07-26 06:27:43.283098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.198 [2024-07-26 06:27:43.283557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.198 [2024-07-26 06:27:43.283606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.198 [2024-07-26 06:27:43.283627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.198 [2024-07-26 06:27:43.283931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.198 [2024-07-26 06:27:43.284171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.198 [2024-07-26 06:27:43.284198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.198 [2024-07-26 06:27:43.284216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.198 [2024-07-26 06:27:43.288336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.198 [2024-07-26 06:27:43.297563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.198 [2024-07-26 06:27:43.298110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.198 [2024-07-26 06:27:43.298146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.198 [2024-07-26 06:27:43.298168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.198 [2024-07-26 06:27:43.298454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.198 [2024-07-26 06:27:43.298684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.198 [2024-07-26 06:27:43.298713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.198 [2024-07-26 06:27:43.298731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.198 [2024-07-26 06:27:43.302811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.198 [2024-07-26 06:27:43.312055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.198 [2024-07-26 06:27:43.312539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.198 [2024-07-26 06:27:43.312580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.198 [2024-07-26 06:27:43.312605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.198 [2024-07-26 06:27:43.312907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.198 [2024-07-26 06:27:43.313148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.198 [2024-07-26 06:27:43.313174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.198 [2024-07-26 06:27:43.313191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.198 [2024-07-26 06:27:43.317258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.198 [2024-07-26 06:27:43.326719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.198 [2024-07-26 06:27:43.327154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.198 [2024-07-26 06:27:43.327190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.198 [2024-07-26 06:27:43.327212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.198 [2024-07-26 06:27:43.327487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.198 [2024-07-26 06:27:43.327717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.198 [2024-07-26 06:27:43.327742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.198 [2024-07-26 06:27:43.327759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.198 [2024-07-26 06:27:43.331198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.198 [2024-07-26 06:27:43.341199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.198 [2024-07-26 06:27:43.341683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.198 [2024-07-26 06:27:43.341723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.198 [2024-07-26 06:27:43.341748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.198 [2024-07-26 06:27:43.342078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.198 [2024-07-26 06:27:43.342358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.198 [2024-07-26 06:27:43.342385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.198 [2024-07-26 06:27:43.342403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.198 [2024-07-26 06:27:43.346501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.198 [2024-07-26 06:27:43.355720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.198 [2024-07-26 06:27:43.356230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.198 [2024-07-26 06:27:43.356271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.198 [2024-07-26 06:27:43.356297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.199 [2024-07-26 06:27:43.356588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.199 [2024-07-26 06:27:43.356818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.199 [2024-07-26 06:27:43.356843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.199 [2024-07-26 06:27:43.356860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.199 [2024-07-26 06:27:43.360985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.199 [2024-07-26 06:27:43.370209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.199 [2024-07-26 06:27:43.370906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.199 [2024-07-26 06:27:43.370966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.199 [2024-07-26 06:27:43.370991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.199 [2024-07-26 06:27:43.371292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.199 [2024-07-26 06:27:43.371541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.199 [2024-07-26 06:27:43.371577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.199 [2024-07-26 06:27:43.371594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.199 [2024-07-26 06:27:43.375676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.199 [2024-07-26 06:27:43.384636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.199 [2024-07-26 06:27:43.385069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.199 [2024-07-26 06:27:43.385124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.199 [2024-07-26 06:27:43.385149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.199 [2024-07-26 06:27:43.385456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.199 [2024-07-26 06:27:43.385687] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.199 [2024-07-26 06:27:43.385711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.199 [2024-07-26 06:27:43.385728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.199 [2024-07-26 06:27:43.389817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.199 [2024-07-26 06:27:43.399242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.199 [2024-07-26 06:27:43.399682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.199 [2024-07-26 06:27:43.399722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.199 [2024-07-26 06:27:43.399753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.199 [2024-07-26 06:27:43.400047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.199 [2024-07-26 06:27:43.400293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.199 [2024-07-26 06:27:43.400318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.199 [2024-07-26 06:27:43.400336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.199 [2024-07-26 06:27:43.404330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.199 [2024-07-26 06:27:43.413149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.199 [2024-07-26 06:27:43.413586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.199 [2024-07-26 06:27:43.413635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.199 [2024-07-26 06:27:43.413673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.199 [2024-07-26 06:27:43.413945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.199 [2024-07-26 06:27:43.414230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.199 [2024-07-26 06:27:43.414259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.199 [2024-07-26 06:27:43.414279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.199 [2024-07-26 06:27:43.418041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.199 [2024-07-26 06:27:43.427008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.199 [2024-07-26 06:27:43.427456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.199 [2024-07-26 06:27:43.427506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.199 [2024-07-26 06:27:43.427529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.199 [2024-07-26 06:27:43.427786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.199 [2024-07-26 06:27:43.428024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.199 [2024-07-26 06:27:43.428051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.199 [2024-07-26 06:27:43.428097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.199 [2024-07-26 06:27:43.431596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.199 [2024-07-26 06:27:43.440702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.199 [2024-07-26 06:27:43.441187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.199 [2024-07-26 06:27:43.441224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.199 [2024-07-26 06:27:43.441248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.199 [2024-07-26 06:27:43.441537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.199 [2024-07-26 06:27:43.441774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.199 [2024-07-26 06:27:43.441804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.199 [2024-07-26 06:27:43.441823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.199 [2024-07-26 06:27:43.445291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.200 [2024-07-26 06:27:43.454606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.200 [2024-07-26 06:27:43.455158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.200 [2024-07-26 06:27:43.455196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.200 [2024-07-26 06:27:43.455220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.200 [2024-07-26 06:27:43.455502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.200 [2024-07-26 06:27:43.455741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.200 [2024-07-26 06:27:43.455768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.200 [2024-07-26 06:27:43.455785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.200 [2024-07-26 06:27:43.459331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.200 [2024-07-26 06:27:43.468372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.200 [2024-07-26 06:27:43.468893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.200 [2024-07-26 06:27:43.468937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.200 [2024-07-26 06:27:43.468960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.200 [2024-07-26 06:27:43.469242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.200 [2024-07-26 06:27:43.469542] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.200 [2024-07-26 06:27:43.469573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.200 [2024-07-26 06:27:43.469592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.200 [2024-07-26 06:27:43.473401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.200 [2024-07-26 06:27:43.482156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.200 [2024-07-26 06:27:43.482601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.200 [2024-07-26 06:27:43.482650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.200 [2024-07-26 06:27:43.482673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.200 [2024-07-26 06:27:43.482950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.200 [2024-07-26 06:27:43.483220] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.200 [2024-07-26 06:27:43.483247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.200 [2024-07-26 06:27:43.483266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.200 [2024-07-26 06:27:43.486729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.200 [2024-07-26 06:27:43.495883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.200 [2024-07-26 06:27:43.496385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.200 [2024-07-26 06:27:43.496422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.200 [2024-07-26 06:27:43.496445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.200 [2024-07-26 06:27:43.496746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.200 [2024-07-26 06:27:43.496984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.200 [2024-07-26 06:27:43.497009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.200 [2024-07-26 06:27:43.497027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.200 [2024-07-26 06:27:43.500527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.200 [2024-07-26 06:27:43.509690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.200 [2024-07-26 06:27:43.510224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.200 [2024-07-26 06:27:43.510262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.200 [2024-07-26 06:27:43.510285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.200 [2024-07-26 06:27:43.510583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.200 [2024-07-26 06:27:43.510821] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.200 [2024-07-26 06:27:43.510846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.200 [2024-07-26 06:27:43.510864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.200 [2024-07-26 06:27:43.514374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.200 [2024-07-26 06:27:43.523691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.200 [2024-07-26 06:27:43.524202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.200 [2024-07-26 06:27:43.524240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.200 [2024-07-26 06:27:43.524264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.200 [2024-07-26 06:27:43.524546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.201 [2024-07-26 06:27:43.524785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.201 [2024-07-26 06:27:43.524811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.201 [2024-07-26 06:27:43.524829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.201 [2024-07-26 06:27:43.528615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.459 [2024-07-26 06:27:43.537661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.459 [2024-07-26 06:27:43.538102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.459 [2024-07-26 06:27:43.538139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.459 [2024-07-26 06:27:43.538168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.459 [2024-07-26 06:27:43.538428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.459 [2024-07-26 06:27:43.538711] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.459 [2024-07-26 06:27:43.538737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.459 [2024-07-26 06:27:43.538756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.459 [2024-07-26 06:27:43.542261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.459 [2024-07-26 06:27:43.551449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.459 [2024-07-26 06:27:43.551919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.459 [2024-07-26 06:27:43.551956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.459 [2024-07-26 06:27:43.551979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.459 [2024-07-26 06:27:43.552275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.459 [2024-07-26 06:27:43.552533] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.459 [2024-07-26 06:27:43.552559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.459 [2024-07-26 06:27:43.552576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.459 [2024-07-26 06:27:43.556036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.459 [2024-07-26 06:27:43.565206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.459 [2024-07-26 06:27:43.565780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.459 [2024-07-26 06:27:43.565816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.459 [2024-07-26 06:27:43.565839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.459 [2024-07-26 06:27:43.566130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.460 [2024-07-26 06:27:43.566390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.460 [2024-07-26 06:27:43.566416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.460 [2024-07-26 06:27:43.566434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.460 [2024-07-26 06:27:43.569908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.460 [2024-07-26 06:27:43.579107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.460 [2024-07-26 06:27:43.579673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.460 [2024-07-26 06:27:43.579709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.460 [2024-07-26 06:27:43.579732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.460 [2024-07-26 06:27:43.580026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.460 [2024-07-26 06:27:43.580302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.460 [2024-07-26 06:27:43.580329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.460 [2024-07-26 06:27:43.580362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.460 [2024-07-26 06:27:43.583841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.460 [2024-07-26 06:27:43.592880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.460 [2024-07-26 06:27:43.593343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.460 [2024-07-26 06:27:43.593393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.460 [2024-07-26 06:27:43.593417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.460 [2024-07-26 06:27:43.593698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.460 [2024-07-26 06:27:43.593938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.460 [2024-07-26 06:27:43.593964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.460 [2024-07-26 06:27:43.593982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.460 [2024-07-26 06:27:43.597545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.460 [2024-07-26 06:27:43.606679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.460 [2024-07-26 06:27:43.607179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.460 [2024-07-26 06:27:43.607215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.460 [2024-07-26 06:27:43.607238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.460 [2024-07-26 06:27:43.607522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.460 [2024-07-26 06:27:43.607763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.460 [2024-07-26 06:27:43.607788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.460 [2024-07-26 06:27:43.607806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.460 [2024-07-26 06:27:43.611297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.460 [2024-07-26 06:27:43.620554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.460 [2024-07-26 06:27:43.620980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.460 [2024-07-26 06:27:43.621016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.460 [2024-07-26 06:27:43.621052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.460 [2024-07-26 06:27:43.621353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.460 [2024-07-26 06:27:43.621610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.460 [2024-07-26 06:27:43.621636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.460 [2024-07-26 06:27:43.621654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.460 [2024-07-26 06:27:43.625172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.460 [2024-07-26 06:27:43.634384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.460 [2024-07-26 06:27:43.634845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.460 [2024-07-26 06:27:43.634882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.460 [2024-07-26 06:27:43.634905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.460 [2024-07-26 06:27:43.635201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.460 [2024-07-26 06:27:43.635461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.460 [2024-07-26 06:27:43.635486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.460 [2024-07-26 06:27:43.635504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.460 [2024-07-26 06:27:43.638955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.460 [2024-07-26 06:27:43.648132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.460 [2024-07-26 06:27:43.648612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.460 [2024-07-26 06:27:43.648649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.460 [2024-07-26 06:27:43.648672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.460 [2024-07-26 06:27:43.648972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.460 [2024-07-26 06:27:43.649242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.460 [2024-07-26 06:27:43.649269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.460 [2024-07-26 06:27:43.649288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.460 [2024-07-26 06:27:43.652757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.460 [2024-07-26 06:27:43.661903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.460 [2024-07-26 06:27:43.662470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-07-26 06:27:43.662506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.460 [2024-07-26 06:27:43.662529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.460 [2024-07-26 06:27:43.662823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.460 [2024-07-26 06:27:43.663090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.460 [2024-07-26 06:27:43.663134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.460 [2024-07-26 06:27:43.663154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.460 [2024-07-26 06:27:43.666656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.460 [2024-07-26 06:27:43.675680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.460 [2024-07-26 06:27:43.676140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-07-26 06:27:43.676177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.460 [2024-07-26 06:27:43.676205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.460 [2024-07-26 06:27:43.676488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.460 [2024-07-26 06:27:43.676728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.460 [2024-07-26 06:27:43.676754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.460 [2024-07-26 06:27:43.676771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.460 [2024-07-26 06:27:43.680268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.460 [2024-07-26 06:27:43.689471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.460 [2024-07-26 06:27:43.690065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-07-26 06:27:43.690101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.460 [2024-07-26 06:27:43.690124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.460 [2024-07-26 06:27:43.690412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.460 [2024-07-26 06:27:43.690651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.460 [2024-07-26 06:27:43.690677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.460 [2024-07-26 06:27:43.690694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.460 [2024-07-26 06:27:43.694204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.460 [2024-07-26 06:27:43.703418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.460 [2024-07-26 06:27:43.703884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-07-26 06:27:43.703921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.460 [2024-07-26 06:27:43.703943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.460 [2024-07-26 06:27:43.704242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.460 [2024-07-26 06:27:43.704500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.460 [2024-07-26 06:27:43.704526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.460 [2024-07-26 06:27:43.704544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.460 [2024-07-26 06:27:43.707995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.460 [2024-07-26 06:27:43.717178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.460 [2024-07-26 06:27:43.717676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-07-26 06:27:43.717725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.460 [2024-07-26 06:27:43.717748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.460 [2024-07-26 06:27:43.718029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.460 [2024-07-26 06:27:43.718304] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.460 [2024-07-26 06:27:43.718331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.460 [2024-07-26 06:27:43.718350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.460 [2024-07-26 06:27:43.721813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.460 [2024-07-26 06:27:43.730931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.460 [2024-07-26 06:27:43.731398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-07-26 06:27:43.731449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.460 [2024-07-26 06:27:43.731472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.460 [2024-07-26 06:27:43.731757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.460 [2024-07-26 06:27:43.731996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.460 [2024-07-26 06:27:43.732021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.460 [2024-07-26 06:27:43.732054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.460 [2024-07-26 06:27:43.735546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.460 [2024-07-26 06:27:43.744678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.460 [2024-07-26 06:27:43.745104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.460 [2024-07-26 06:27:43.745154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.460 [2024-07-26 06:27:43.745176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.461 [2024-07-26 06:27:43.745492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.461 [2024-07-26 06:27:43.745732] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.461 [2024-07-26 06:27:43.745757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.461 [2024-07-26 06:27:43.745775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.461 [2024-07-26 06:27:43.749261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.461 [2024-07-26 06:27:43.758437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.461 [2024-07-26 06:27:43.758954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-07-26 06:27:43.759005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.461 [2024-07-26 06:27:43.759028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.461 [2024-07-26 06:27:43.759338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.461 [2024-07-26 06:27:43.759612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.461 [2024-07-26 06:27:43.759638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.461 [2024-07-26 06:27:43.759656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.461 [2024-07-26 06:27:43.763138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.461 [2024-07-26 06:27:43.772193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.461 [2024-07-26 06:27:43.772631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-07-26 06:27:43.772680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.461 [2024-07-26 06:27:43.772703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.461 [2024-07-26 06:27:43.772982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.461 [2024-07-26 06:27:43.773257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.461 [2024-07-26 06:27:43.773284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.461 [2024-07-26 06:27:43.773303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.461 [2024-07-26 06:27:43.776763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.461 [2024-07-26 06:27:43.785913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.461 [2024-07-26 06:27:43.786424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.461 [2024-07-26 06:27:43.786460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.461 [2024-07-26 06:27:43.786483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.461 [2024-07-26 06:27:43.786782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.461 [2024-07-26 06:27:43.787019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.461 [2024-07-26 06:27:43.787069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.461 [2024-07-26 06:27:43.787091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.461 [2024-07-26 06:27:43.790769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.799993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.800525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.800577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.800601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.800882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.801152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.801179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.801198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.804732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.813749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.814254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.814291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.814319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.814619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.814860] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.814886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.814903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.818441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.827647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.828107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.828144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.828167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.828456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.828697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.828723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.828740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.832240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.841487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.842055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.842097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.842121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.842420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.842659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.842684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.842703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.846199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.855375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.855904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.855954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.855978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.856304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.856564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.856590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.856608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.860097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.869214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.869668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.869719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.869742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.870043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.870313] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.870339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.870357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.873823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.883017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.883522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.883572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.883596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.883887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.884156] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.884183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.884201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.887681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.896863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.897305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.897342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.897364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.897648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.897887] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.897913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.897930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.901473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.910709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.911231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.911281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.911305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.911598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.911836] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.911861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.911879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.915393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.924733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.720 [2024-07-26 06:27:43.925201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.720 [2024-07-26 06:27:43.925238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:32.720 [2024-07-26 06:27:43.925261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:32.720 [2024-07-26 06:27:43.925520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:32.720 [2024-07-26 06:27:43.925798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.720 [2024-07-26 06:27:43.925826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.720 [2024-07-26 06:27:43.925844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.720 [2024-07-26 06:27:43.929648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.720 [2024-07-26 06:27:43.938474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.720 [2024-07-26 06:27:43.939071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.720 [2024-07-26 06:27:43.939107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.720 [2024-07-26 06:27:43.939130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.720 [2024-07-26 06:27:43.939415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.720 [2024-07-26 06:27:43.939654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.720 [2024-07-26 06:27:43.939679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.720 [2024-07-26 06:27:43.939696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.720 [2024-07-26 06:27:43.943171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.720 [2024-07-26 06:27:43.952321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.720 [2024-07-26 06:27:43.952862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.720 [2024-07-26 06:27:43.952906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.720 [2024-07-26 06:27:43.952930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.720 [2024-07-26 06:27:43.953228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.720 [2024-07-26 06:27:43.953488] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.720 [2024-07-26 06:27:43.953514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.720 [2024-07-26 06:27:43.953532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.720 [2024-07-26 06:27:43.956996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.720 [2024-07-26 06:27:43.966204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.720 [2024-07-26 06:27:43.966635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.720 [2024-07-26 06:27:43.966684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.721 [2024-07-26 06:27:43.966721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.721 [2024-07-26 06:27:43.967006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.721 [2024-07-26 06:27:43.967292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.721 [2024-07-26 06:27:43.967321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.721 [2024-07-26 06:27:43.967341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.721 [2024-07-26 06:27:43.970820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.721 [2024-07-26 06:27:43.979982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.721 [2024-07-26 06:27:43.980527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.721 [2024-07-26 06:27:43.980578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.721 [2024-07-26 06:27:43.980602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.721 [2024-07-26 06:27:43.980874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.721 [2024-07-26 06:27:43.981143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.721 [2024-07-26 06:27:43.981170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.721 [2024-07-26 06:27:43.981188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.721 [2024-07-26 06:27:43.984535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.721 [2024-07-26 06:27:43.993877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.721 [2024-07-26 06:27:43.994398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.721 [2024-07-26 06:27:43.994449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.721 [2024-07-26 06:27:43.994473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.721 [2024-07-26 06:27:43.994769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.721 [2024-07-26 06:27:43.995012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.721 [2024-07-26 06:27:43.995038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.721 [2024-07-26 06:27:43.995083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.721 [2024-07-26 06:27:43.998582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.721 [2024-07-26 06:27:44.007797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.721 [2024-07-26 06:27:44.008253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.721 [2024-07-26 06:27:44.008289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.721 [2024-07-26 06:27:44.008312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.721 [2024-07-26 06:27:44.008593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.721 [2024-07-26 06:27:44.008832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.721 [2024-07-26 06:27:44.008858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.721 [2024-07-26 06:27:44.008875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.721 [2024-07-26 06:27:44.012422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.721 [2024-07-26 06:27:44.021638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.721 [2024-07-26 06:27:44.022106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.721 [2024-07-26 06:27:44.022143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.721 [2024-07-26 06:27:44.022165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.721 [2024-07-26 06:27:44.022454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.721 [2024-07-26 06:27:44.022696] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.721 [2024-07-26 06:27:44.022721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.721 [2024-07-26 06:27:44.022739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.721 [2024-07-26 06:27:44.026259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.721 [2024-07-26 06:27:44.035517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.721 [2024-07-26 06:27:44.036016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.721 [2024-07-26 06:27:44.036076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.721 [2024-07-26 06:27:44.036102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.721 [2024-07-26 06:27:44.036405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.721 [2024-07-26 06:27:44.036647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.721 [2024-07-26 06:27:44.036672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.721 [2024-07-26 06:27:44.036690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.721 [2024-07-26 06:27:44.040214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.721 [2024-07-26 06:27:44.049744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.721 [2024-07-26 06:27:44.050192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.721 [2024-07-26 06:27:44.050229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.721 [2024-07-26 06:27:44.050252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.721 [2024-07-26 06:27:44.050529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.721 [2024-07-26 06:27:44.050808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.721 [2024-07-26 06:27:44.050836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.721 [2024-07-26 06:27:44.050856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.980 [2024-07-26 06:27:44.054745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.980 [2024-07-26 06:27:44.063507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.980 [2024-07-26 06:27:44.063967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.980 [2024-07-26 06:27:44.064018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.980 [2024-07-26 06:27:44.064042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.980 [2024-07-26 06:27:44.064338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.980 [2024-07-26 06:27:44.064593] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.980 [2024-07-26 06:27:44.064619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.980 [2024-07-26 06:27:44.064636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.980 [2024-07-26 06:27:44.068122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.980 [2024-07-26 06:27:44.077316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.980 [2024-07-26 06:27:44.077710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.980 [2024-07-26 06:27:44.077760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.980 [2024-07-26 06:27:44.077782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.980 [2024-07-26 06:27:44.078094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.980 [2024-07-26 06:27:44.078372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.980 [2024-07-26 06:27:44.078398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.980 [2024-07-26 06:27:44.078415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.980 [2024-07-26 06:27:44.081886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.980 [2024-07-26 06:27:44.091170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.980 [2024-07-26 06:27:44.091624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.980 [2024-07-26 06:27:44.091681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.980 [2024-07-26 06:27:44.091705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.980 [2024-07-26 06:27:44.092003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.980 [2024-07-26 06:27:44.092271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.980 [2024-07-26 06:27:44.092298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.980 [2024-07-26 06:27:44.092317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.980 [2024-07-26 06:27:44.095793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.980 [2024-07-26 06:27:44.105021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.980 [2024-07-26 06:27:44.105548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.980 [2024-07-26 06:27:44.105584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.105607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.105906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.106177] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.106204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.106222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.109700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.118855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.119335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.119372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.119395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.119691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.119930] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.119956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.119973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.123482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.132739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.133202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.133239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.133262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.133559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.133802] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.133829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.133847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.137333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.146509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.146975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.147012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.147035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.147331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.147587] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.147613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.147632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.151036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.160299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.160726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.160777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.160799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.161107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.161396] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.161422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.161440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.164894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.174037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.174556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.174593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.174616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.174897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.175188] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.175216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.175240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.179066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.187909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.188362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.188413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.188436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.188723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.188962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.188988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.189005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.192535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.201707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.202181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.202221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.202243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.202528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.202789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.202815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.202833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.206411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.215568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.216035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.216077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.216101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.216410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.216648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.216673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.216691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.220172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.229403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.229855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.229895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.981 [2024-07-26 06:27:44.229934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.981 [2024-07-26 06:27:44.230251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.981 [2024-07-26 06:27:44.230532] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.981 [2024-07-26 06:27:44.230557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.981 [2024-07-26 06:27:44.230575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.981 [2024-07-26 06:27:44.234025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.981 [2024-07-26 06:27:44.243281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.981 [2024-07-26 06:27:44.243776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.981 [2024-07-26 06:27:44.243826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.982 [2024-07-26 06:27:44.243850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.982 [2024-07-26 06:27:44.244148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.982 [2024-07-26 06:27:44.244412] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.982 [2024-07-26 06:27:44.244438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.982 [2024-07-26 06:27:44.244456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.982 [2024-07-26 06:27:44.247948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.982 [2024-07-26 06:27:44.257006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.982 [2024-07-26 06:27:44.257517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.982 [2024-07-26 06:27:44.257568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.982 [2024-07-26 06:27:44.257592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.982 [2024-07-26 06:27:44.257890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.982 [2024-07-26 06:27:44.258161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.982 [2024-07-26 06:27:44.258189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.982 [2024-07-26 06:27:44.258207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.982 [2024-07-26 06:27:44.261698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.982 [2024-07-26 06:27:44.270869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.982 [2024-07-26 06:27:44.271408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.982 [2024-07-26 06:27:44.271444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.982 [2024-07-26 06:27:44.271467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.982 [2024-07-26 06:27:44.271779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.982 [2024-07-26 06:27:44.272019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.982 [2024-07-26 06:27:44.272044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.982 [2024-07-26 06:27:44.272088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.982 [2024-07-26 06:27:44.275585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.982 [2024-07-26 06:27:44.284761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.982 [2024-07-26 06:27:44.285245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.982 [2024-07-26 06:27:44.285281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.982 [2024-07-26 06:27:44.285304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.982 [2024-07-26 06:27:44.285590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.982 [2024-07-26 06:27:44.285829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.982 [2024-07-26 06:27:44.285854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.982 [2024-07-26 06:27:44.285872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.982 [2024-07-26 06:27:44.289384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.982 [2024-07-26 06:27:44.298571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.982 [2024-07-26 06:27:44.299067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.982 [2024-07-26 06:27:44.299119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:32.982 [2024-07-26 06:27:44.299143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:32.982 [2024-07-26 06:27:44.299443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:32.982 [2024-07-26 06:27:44.299681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:32.982 [2024-07-26 06:27:44.299707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:32.982 [2024-07-26 06:27:44.299724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:32.982 [2024-07-26 06:27:44.303241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:32.982 [2024-07-26 06:27:44.312879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:32.982 [2024-07-26 06:27:44.313330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.241 [2024-07-26 06:27:44.313368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:33.241 [2024-07-26 06:27:44.313391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:33.241 [2024-07-26 06:27:44.313650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:33.241 [2024-07-26 06:27:44.313928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:33.241 [2024-07-26 06:27:44.313955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:33.241 [2024-07-26 06:27:44.313979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:33.241 [2024-07-26 06:27:44.317643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:33.241 [2024-07-26 06:27:44.326708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.241 [2024-07-26 06:27:44.327305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.241 [2024-07-26 06:27:44.327352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.241 [2024-07-26 06:27:44.327374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.241 [2024-07-26 06:27:44.327657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.241 [2024-07-26 06:27:44.327897] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.241 [2024-07-26 06:27:44.327922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.241 [2024-07-26 06:27:44.327940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.241 [2024-07-26 06:27:44.331525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.241 [2024-07-26 06:27:44.340553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.241 [2024-07-26 06:27:44.341139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.241 [2024-07-26 06:27:44.341175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.241 [2024-07-26 06:27:44.341198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.241 [2024-07-26 06:27:44.341486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.241 [2024-07-26 06:27:44.341724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.241 [2024-07-26 06:27:44.341750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.241 [2024-07-26 06:27:44.341767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.241 [2024-07-26 06:27:44.345268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.241 [2024-07-26 06:27:44.354278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.241 [2024-07-26 06:27:44.354992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.241 [2024-07-26 06:27:44.355057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.241 [2024-07-26 06:27:44.355094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.241 [2024-07-26 06:27:44.355373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.241 [2024-07-26 06:27:44.355630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.241 [2024-07-26 06:27:44.355656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.241 [2024-07-26 06:27:44.355693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.241 [2024-07-26 06:27:44.359230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.241 [2024-07-26 06:27:44.368034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.241 [2024-07-26 06:27:44.368640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.241 [2024-07-26 06:27:44.368678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.241 [2024-07-26 06:27:44.368701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.241 [2024-07-26 06:27:44.368999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.241 [2024-07-26 06:27:44.369275] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.241 [2024-07-26 06:27:44.369313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.241 [2024-07-26 06:27:44.369332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.241 [2024-07-26 06:27:44.372802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.241 [2024-07-26 06:27:44.381943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.241 [2024-07-26 06:27:44.382431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.241 [2024-07-26 06:27:44.382469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.241 [2024-07-26 06:27:44.382492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.241 [2024-07-26 06:27:44.382789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.241 [2024-07-26 06:27:44.383028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.241 [2024-07-26 06:27:44.383078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.241 [2024-07-26 06:27:44.383099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.241 [2024-07-26 06:27:44.386583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.241 [2024-07-26 06:27:44.395799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.396289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.396327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.396365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.396643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.396882] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.396908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.251 [2024-07-26 06:27:44.396926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.251 [2024-07-26 06:27:44.400422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.251 [2024-07-26 06:27:44.409649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.410056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.410114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.410138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.410432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.410671] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.410696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.251 [2024-07-26 06:27:44.410713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.251 [2024-07-26 06:27:44.414173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.251 [2024-07-26 06:27:44.423545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.424100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.424138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.424161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.424460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.424698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.424724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.251 [2024-07-26 06:27:44.424742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.251 [2024-07-26 06:27:44.428249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.251 [2024-07-26 06:27:44.437390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.437887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.437939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.437963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.438264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.438523] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.438548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.251 [2024-07-26 06:27:44.438566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.251 [2024-07-26 06:27:44.442031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.251 [2024-07-26 06:27:44.451225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.451654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.451706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.451729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.452017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.452290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.452317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.251 [2024-07-26 06:27:44.452359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.251 [2024-07-26 06:27:44.455841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.251 [2024-07-26 06:27:44.464953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.465470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.465522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.465546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.465835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.466108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.466136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.251 [2024-07-26 06:27:44.466154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.251 [2024-07-26 06:27:44.469672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.251 [2024-07-26 06:27:44.478755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.479251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.479288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.479311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.479601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.479840] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.479866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.251 [2024-07-26 06:27:44.479884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.251 [2024-07-26 06:27:44.483418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.251 [2024-07-26 06:27:44.492571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.251 [2024-07-26 06:27:44.493118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.251 [2024-07-26 06:27:44.493170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.251 [2024-07-26 06:27:44.493193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.251 [2024-07-26 06:27:44.493470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.251 [2024-07-26 06:27:44.493709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.251 [2024-07-26 06:27:44.493735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.252 [2024-07-26 06:27:44.493752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.252 [2024-07-26 06:27:44.497244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.252 [2024-07-26 06:27:44.506461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.252 [2024-07-26 06:27:44.506916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.252 [2024-07-26 06:27:44.506967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.252 [2024-07-26 06:27:44.506991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.252 [2024-07-26 06:27:44.507268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.252 [2024-07-26 06:27:44.507528] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.252 [2024-07-26 06:27:44.507554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.252 [2024-07-26 06:27:44.507572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.252 [2024-07-26 06:27:44.511022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.252 [2024-07-26 06:27:44.520146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.252 [2024-07-26 06:27:44.520622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.252 [2024-07-26 06:27:44.520672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.252 [2024-07-26 06:27:44.520696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.252 [2024-07-26 06:27:44.520994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.252 [2024-07-26 06:27:44.521264] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.252 [2024-07-26 06:27:44.521291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.252 [2024-07-26 06:27:44.521310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.252 [2024-07-26 06:27:44.524850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.252 [2024-07-26 06:27:44.533952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.252 [2024-07-26 06:27:44.534515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.252 [2024-07-26 06:27:44.534565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.252 [2024-07-26 06:27:44.534589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.252 [2024-07-26 06:27:44.534884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.252 [2024-07-26 06:27:44.535153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.252 [2024-07-26 06:27:44.535180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.252 [2024-07-26 06:27:44.535199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.252 [2024-07-26 06:27:44.538667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.252 [2024-07-26 06:27:44.547790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.252 [2024-07-26 06:27:44.548256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.252 [2024-07-26 06:27:44.548293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.252 [2024-07-26 06:27:44.548315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.252 [2024-07-26 06:27:44.548605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.252 [2024-07-26 06:27:44.548843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.252 [2024-07-26 06:27:44.548869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.252 [2024-07-26 06:27:44.548887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.252 [2024-07-26 06:27:44.552383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.252 [2024-07-26 06:27:44.561540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.252 [2024-07-26 06:27:44.562075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.252 [2024-07-26 06:27:44.562112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.252 [2024-07-26 06:27:44.562134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.252 [2024-07-26 06:27:44.562419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.252 [2024-07-26 06:27:44.562656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.252 [2024-07-26 06:27:44.562682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.252 [2024-07-26 06:27:44.562699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.252 [2024-07-26 06:27:44.566183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.575918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.576497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.576533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.576555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.576848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.577115] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.577142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.511 [2024-07-26 06:27:44.577160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.511 [2024-07-26 06:27:44.580635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.590135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.590650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.590708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.590732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.591052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.591317] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.591361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.511 [2024-07-26 06:27:44.591385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.511 [2024-07-26 06:27:44.594888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.603902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.604334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.604370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.604392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.604653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.604891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.604917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.511 [2024-07-26 06:27:44.604934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.511 [2024-07-26 06:27:44.608479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.617681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.618208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.618260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.618284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.618567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.618808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.618834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.511 [2024-07-26 06:27:44.618852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.511 [2024-07-26 06:27:44.622539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.631601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.632068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.632105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.632128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.632428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.632666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.632692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.511 [2024-07-26 06:27:44.632709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.511 [2024-07-26 06:27:44.636183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.645322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.645719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.645755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.645777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.646055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.646324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.646351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.511 [2024-07-26 06:27:44.646384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.511 [2024-07-26 06:27:44.649832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.658975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.659540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.659576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.659599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.659895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.660162] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.660189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.511 [2024-07-26 06:27:44.660208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.511 [2024-07-26 06:27:44.663680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.511 [2024-07-26 06:27:44.672800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.511 [2024-07-26 06:27:44.673257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.511 [2024-07-26 06:27:44.673294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.511 [2024-07-26 06:27:44.673317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.511 [2024-07-26 06:27:44.673599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.511 [2024-07-26 06:27:44.673838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.511 [2024-07-26 06:27:44.673864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.673881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.677432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.686625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.687117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.687159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.687185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.687480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.687772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.687804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.687825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.692005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.701106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.701606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.701646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.701671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.701958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.702259] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.702291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.702314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.706474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.715683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.716172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.716222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.716246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.716549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.716841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.716872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.716894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.721028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.730209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.730722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.730756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.730793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.731091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.731375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.731401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.731444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.735593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.744916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.745420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.745461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.745487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.745775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.746075] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.746118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.746139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.750307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.759589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.760084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.760147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.760171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.760484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.760777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.760807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.760830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.765016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.774045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.774537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.774577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.774603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.774891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.775194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.775226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.775247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.779410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.788705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.789229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.789270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.789295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.789584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.789875] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.789906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.789927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.794132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.803171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.803658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.803698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.803724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.804013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.804314] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.804346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.512 [2024-07-26 06:27:44.804367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.512 [2024-07-26 06:27:44.808526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.512 [2024-07-26 06:27:44.817793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.512 [2024-07-26 06:27:44.818306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-07-26 06:27:44.818346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.512 [2024-07-26 06:27:44.818371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.512 [2024-07-26 06:27:44.818659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.512 [2024-07-26 06:27:44.818948] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.512 [2024-07-26 06:27:44.818979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.513 [2024-07-26 06:27:44.819001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.513 [2024-07-26 06:27:44.823162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.513 [2024-07-26 06:27:44.832412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.513 [2024-07-26 06:27:44.832899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-07-26 06:27:44.832933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.513 [2024-07-26 06:27:44.832955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.513 [2024-07-26 06:27:44.833271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.513 [2024-07-26 06:27:44.833559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.513 [2024-07-26 06:27:44.833591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.513 [2024-07-26 06:27:44.833613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.513 [2024-07-26 06:27:44.837767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 [2024-07-26 06:27:44.847030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.847647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.847689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.847715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.773 [2024-07-26 06:27:44.848004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.773 [2024-07-26 06:27:44.848316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.773 [2024-07-26 06:27:44.848349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.773 [2024-07-26 06:27:44.848371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.773 [2024-07-26 06:27:44.852532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 [2024-07-26 06:27:44.861566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.862068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.862109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.862135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.773 [2024-07-26 06:27:44.862425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.773 [2024-07-26 06:27:44.862715] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.773 [2024-07-26 06:27:44.862746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.773 [2024-07-26 06:27:44.862768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.773 [2024-07-26 06:27:44.866932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 308224 Killed "${NVMF_APP[@]}" "$@"
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=309438
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 309438
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 309438 ']'
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:33.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:33.773 06:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:33.773 [2024-07-26 06:27:44.876250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:33.773 [2024-07-26 06:27:44.876736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.773 [2024-07-26 06:27:44.876777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:33.773 [2024-07-26 06:27:44.876803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:33.773 [2024-07-26 06:27:44.877100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:33.773 [2024-07-26 06:27:44.877390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:33.773 [2024-07-26 06:27:44.877422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:33.773 [2024-07-26 06:27:44.877444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:33.773 [2024-07-26 06:27:44.881602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:33.773 [2024-07-26 06:27:44.890831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.891306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.891346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.891370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.773 [2024-07-26 06:27:44.891647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.773 [2024-07-26 06:27:44.891927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.773 [2024-07-26 06:27:44.891957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.773 [2024-07-26 06:27:44.891979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.773 [2024-07-26 06:27:44.895898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 [2024-07-26 06:27:44.904913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.905388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.905426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.905449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.773 [2024-07-26 06:27:44.905738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.773 [2024-07-26 06:27:44.905979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.773 [2024-07-26 06:27:44.906011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.773 [2024-07-26 06:27:44.906032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.773 [2024-07-26 06:27:44.909717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 [2024-07-26 06:27:44.918918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.919569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.919630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.919660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.773 [2024-07-26 06:27:44.919955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.773 [2024-07-26 06:27:44.920242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.773 [2024-07-26 06:27:44.920271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.773 [2024-07-26 06:27:44.920293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.773 [2024-07-26 06:27:44.923915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 [2024-07-26 06:27:44.932789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.933276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.933313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.933336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.773 [2024-07-26 06:27:44.933644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.773 [2024-07-26 06:27:44.933889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.773 [2024-07-26 06:27:44.933914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.773 [2024-07-26 06:27:44.933932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.773 [2024-07-26 06:27:44.937531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 [2024-07-26 06:27:44.946789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:33.773 [2024-07-26 06:27:44.947301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.773 [2024-07-26 06:27:44.947339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:33.773 [2024-07-26 06:27:44.947363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:33.773 [2024-07-26 06:27:44.947648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:33.773 [2024-07-26 06:27:44.947894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:33.773 [2024-07-26 06:27:44.947920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:33.773 [2024-07-26 06:27:44.947938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:33.773 [2024-07-26 06:27:44.951517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:33.773 [2024-07-26 06:27:44.959465] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:35:33.773 [2024-07-26 06:27:44.959585] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.773 [2024-07-26 06:27:44.960718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.961204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.961242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.961266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.773 [2024-07-26 06:27:44.961584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.773 [2024-07-26 06:27:44.961857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.773 [2024-07-26 06:27:44.961883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.773 [2024-07-26 06:27:44.961903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.773 [2024-07-26 06:27:44.965582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.773 [2024-07-26 06:27:44.974614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.773 [2024-07-26 06:27:44.975072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.773 [2024-07-26 06:27:44.975109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.773 [2024-07-26 06:27:44.975133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:44.975442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:44.975688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:44.975713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:44.975731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:44.979337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:44.988561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:44.989074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:44.989129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:44.989154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:44.989443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:44.989689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:44.989715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:44.989734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:44.993483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:45.002524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.003019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.003080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.003106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.003386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.003651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.003677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.003695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.007373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:45.016342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.016797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.016833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.016857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.017174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.017445] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.017471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.017490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.021049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:45.030287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.030744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.030795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.030820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.031134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.031403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.031429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.031447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.034989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.774 [2024-07-26 06:27:45.044212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.044777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.044813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.044845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.045140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.045403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.045430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.045447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.049004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:45.058834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.059311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.059348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.059371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.059667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.059960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.059991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.060013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.064260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:45.073286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.073775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.073810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.073832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.074153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.074462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.074496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.074517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.078690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:45.087993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.088463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.088504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.088529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.088822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.089146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.089190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.089210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.093389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.774 [2024-07-26 06:27:45.102571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.774 [2024-07-26 06:27:45.103074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.774 [2024-07-26 06:27:45.103128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:33.774 [2024-07-26 06:27:45.103152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:33.774 [2024-07-26 06:27:45.103414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:33.774 [2024-07-26 06:27:45.103681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.774 [2024-07-26 06:27:45.103709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.774 [2024-07-26 06:27:45.103746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.774 [2024-07-26 06:27:45.105149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:34.034 [2024-07-26 06:27:45.107958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.034 [2024-07-26 06:27:45.117161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.034 [2024-07-26 06:27:45.117833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.034 [2024-07-26 06:27:45.117885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.034 [2024-07-26 06:27:45.117916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.034 [2024-07-26 06:27:45.118229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.034 [2024-07-26 06:27:45.118533] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.034 [2024-07-26 06:27:45.118566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.034 [2024-07-26 06:27:45.118593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.034 [2024-07-26 06:27:45.122781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.034 [2024-07-26 06:27:45.131900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.034 [2024-07-26 06:27:45.132411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.034 [2024-07-26 06:27:45.132452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.034 [2024-07-26 06:27:45.132478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.034 [2024-07-26 06:27:45.132772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.034 [2024-07-26 06:27:45.133081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.034 [2024-07-26 06:27:45.133126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.034 [2024-07-26 06:27:45.133151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.034 [2024-07-26 06:27:45.137305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.034 [2024-07-26 06:27:45.146402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.034 [2024-07-26 06:27:45.146908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.034 [2024-07-26 06:27:45.146947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.034 [2024-07-26 06:27:45.146972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.034 [2024-07-26 06:27:45.147287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.034 [2024-07-26 06:27:45.147603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.034 [2024-07-26 06:27:45.147635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.034 [2024-07-26 06:27:45.147657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.034 [2024-07-26 06:27:45.151903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.161183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.161700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.161751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.161775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.162110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.162410] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.162457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.162478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.166866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.175791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.176313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.176367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.176393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.176687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.176987] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.177018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.177039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.181235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.190283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.190799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.190847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.190873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.191189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.191488] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.191520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.191542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.195865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.204957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.205468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.205509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.205534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.205826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.206146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.206189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.206208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.210354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.219610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.220194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.220231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.220264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.220574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.220873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.220904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.220926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.225212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.234196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.234759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.234801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.234827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.235163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.235450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.235482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.235504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.239720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.248928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.249675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.249732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.249766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.250102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.250390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.250437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.250465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.254651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.263466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.263948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.263999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.264022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.264324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.264636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.264668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.264690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.268871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.278009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.278546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.278587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.278613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.278918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.279227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.279255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.279279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.035 [2024-07-26 06:27:45.283547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.035 [2024-07-26 06:27:45.292740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.035 [2024-07-26 06:27:45.293312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.035 [2024-07-26 06:27:45.293368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.035 [2024-07-26 06:27:45.293394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.035 [2024-07-26 06:27:45.293692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.035 [2024-07-26 06:27:45.293992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.035 [2024-07-26 06:27:45.294023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.035 [2024-07-26 06:27:45.294049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.036 [2024-07-26 06:27:45.298317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.036 [2024-07-26 06:27:45.307228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.036 [2024-07-26 06:27:45.307784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.036 [2024-07-26 06:27:45.307825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.036 [2024-07-26 06:27:45.307851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.036 [2024-07-26 06:27:45.308168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.036 [2024-07-26 06:27:45.308447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.036 [2024-07-26 06:27:45.308479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.036 [2024-07-26 06:27:45.308501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.036 [2024-07-26 06:27:45.312622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.036 [2024-07-26 06:27:45.321857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.036 [2024-07-26 06:27:45.322372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.036 [2024-07-26 06:27:45.322408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.036 [2024-07-26 06:27:45.322431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.036 [2024-07-26 06:27:45.322731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.036 [2024-07-26 06:27:45.323024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.036 [2024-07-26 06:27:45.323055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.036 [2024-07-26 06:27:45.323091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.036 [2024-07-26 06:27:45.327267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.036 [2024-07-26 06:27:45.336313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.036 [2024-07-26 06:27:45.336839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.036 [2024-07-26 06:27:45.336879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.036 [2024-07-26 06:27:45.336904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.036 [2024-07-26 06:27:45.337210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.036 [2024-07-26 06:27:45.337507] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.036 [2024-07-26 06:27:45.337539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.036 [2024-07-26 06:27:45.337561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.036 [2024-07-26 06:27:45.341740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.036 [2024-07-26 06:27:45.351003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.036 [2024-07-26 06:27:45.351536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.036 [2024-07-26 06:27:45.351572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.036 [2024-07-26 06:27:45.351595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.036 [2024-07-26 06:27:45.351912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.036 [2024-07-26 06:27:45.352225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.036 [2024-07-26 06:27:45.352266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.036 [2024-07-26 06:27:45.352284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.036 [2024-07-26 06:27:45.356571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.036 [2024-07-26 06:27:45.364124] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.036 [2024-07-26 06:27:45.364179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.036 [2024-07-26 06:27:45.364207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.036 [2024-07-26 06:27:45.364240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:34.036 [2024-07-26 06:27:45.364259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.036 [2024-07-26 06:27:45.364357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:34.036 [2024-07-26 06:27:45.364392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.036 [2024-07-26 06:27:45.364402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:34.036 [2024-07-26 06:27:45.365605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.036 [2024-07-26 06:27:45.366108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.036 [2024-07-26 06:27:45.366146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.036 [2024-07-26 06:27:45.366170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.036 [2024-07-26 06:27:45.366456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.036 [2024-07-26 06:27:45.366741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.036 [2024-07-26 06:27:45.366770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.036 [2024-07-26 06:27:45.366796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.296 [2024-07-26 06:27:45.370766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.296 [2024-07-26 06:27:45.379801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.296 [2024-07-26 06:27:45.380538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.296 [2024-07-26 06:27:45.380593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.296 [2024-07-26 06:27:45.380624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.296 [2024-07-26 06:27:45.380919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.296 [2024-07-26 06:27:45.381223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.296 [2024-07-26 06:27:45.381254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.296 [2024-07-26 06:27:45.381295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.296 [2024-07-26 06:27:45.385152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.296 [2024-07-26 06:27:45.393997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.296 [2024-07-26 06:27:45.394475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.296 [2024-07-26 06:27:45.394511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.296 [2024-07-26 06:27:45.394550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.296 [2024-07-26 06:27:45.394829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.296 [2024-07-26 06:27:45.395123] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.296 [2024-07-26 06:27:45.395152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.296 [2024-07-26 06:27:45.395187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.296 [2024-07-26 06:27:45.399023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.296 [2024-07-26 06:27:45.408313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.296 [2024-07-26 06:27:45.408788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.296 [2024-07-26 06:27:45.408825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.296 [2024-07-26 06:27:45.408848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.296 [2024-07-26 06:27:45.409165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.296 [2024-07-26 06:27:45.409455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.296 [2024-07-26 06:27:45.409482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.296 [2024-07-26 06:27:45.409502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.296 [2024-07-26 06:27:45.413389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.296 [2024-07-26 06:27:45.422421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.296 [2024-07-26 06:27:45.422911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.296 [2024-07-26 06:27:45.422948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.296 [2024-07-26 06:27:45.422971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.296 [2024-07-26 06:27:45.423250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.296 [2024-07-26 06:27:45.423533] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.296 [2024-07-26 06:27:45.423562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.296 [2024-07-26 06:27:45.423581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.296 [2024-07-26 06:27:45.427428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.296 [2024-07-26 06:27:45.436577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.296 [2024-07-26 06:27:45.437052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.296 [2024-07-26 06:27:45.437106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.296 [2024-07-26 06:27:45.437130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.296 [2024-07-26 06:27:45.437413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.296 [2024-07-26 06:27:45.437673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.296 [2024-07-26 06:27:45.437701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.296 [2024-07-26 06:27:45.437722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.296 [2024-07-26 06:27:45.441555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.296 [2024-07-26 06:27:45.450822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.296 [2024-07-26 06:27:45.451453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.296 [2024-07-26 06:27:45.451503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.296 [2024-07-26 06:27:45.451533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.296 [2024-07-26 06:27:45.451830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.296 [2024-07-26 06:27:45.452129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.296 [2024-07-26 06:27:45.452160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.296 [2024-07-26 06:27:45.452186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.455984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.465162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.465885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.465934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.465964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.466266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.466557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.466586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.466611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.470478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.479553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.480039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.480084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.480119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.480413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.480677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.480705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.480725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.484631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.493634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.494109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.494146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.494174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.494456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.494716] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.494743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.494762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.498605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.507745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.508205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.508241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.508264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.508549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.508807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.508839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.508859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.512621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.521854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.522312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.522348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.522371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.522649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.522907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.522934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.522954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.526740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.535917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.536407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.536443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.536466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.536741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.536997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.537025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.537068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.540786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.549940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.550383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.550434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.550457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.550747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.551003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.551030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.551077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.554793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.563984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.564434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.564471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.564494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.564768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.565021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.565048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.565092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.568870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.578011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.578567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.578603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.578625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.578900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.579187] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.579227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.297 [2024-07-26 06:27:45.579246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.297 [2024-07-26 06:27:45.583073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.297 [2024-07-26 06:27:45.592199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.297 [2024-07-26 06:27:45.592931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.297 [2024-07-26 06:27:45.592999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.297 [2024-07-26 06:27:45.593030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.297 [2024-07-26 06:27:45.593324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.297 [2024-07-26 06:27:45.593613] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.297 [2024-07-26 06:27:45.593643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.298 [2024-07-26 06:27:45.593669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.298 [2024-07-26 06:27:45.597585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.298 [2024-07-26 06:27:45.606476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.298 [2024-07-26 06:27:45.607218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.298 [2024-07-26 06:27:45.607270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.298 [2024-07-26 06:27:45.607302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.298 [2024-07-26 06:27:45.607601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.298 [2024-07-26 06:27:45.607870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.298 [2024-07-26 06:27:45.607899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.298 [2024-07-26 06:27:45.607923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.298 [2024-07-26 06:27:45.611735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.298 [2024-07-26 06:27:45.620610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.298 [2024-07-26 06:27:45.621073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.298 [2024-07-26 06:27:45.621109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.298 [2024-07-26 06:27:45.621133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.298 [2024-07-26 06:27:45.621414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.298 [2024-07-26 06:27:45.621674] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.298 [2024-07-26 06:27:45.621702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.298 [2024-07-26 06:27:45.621721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.298 [2024-07-26 06:27:45.625655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.635010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.635478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.635515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.635538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.635816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.636103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.636133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.636153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.639969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.649163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.649665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.649701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.649724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.650007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.650303] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.650337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.650358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.654251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.663355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.663777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.663814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.663837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.664143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.664423] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.664451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.664469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.668322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.677632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.678078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.678118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.678141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.678422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.678680] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.678708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.678727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.682624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.691812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.692355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.692397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.692423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.692692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.692960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.692989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.693010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.697068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.705978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.706451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.706488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.706511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.706793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.707081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.707110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.707130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.710933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.720397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.720830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.720866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.720889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.721195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.721479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.721507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.721526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.725392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.734570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.735018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.735072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.735097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.735369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.735651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.735678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.735696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.739539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.558 [2024-07-26 06:27:45.748622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.558 [2024-07-26 06:27:45.749078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.558 [2024-07-26 06:27:45.749115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.558 [2024-07-26 06:27:45.749143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.558 [2024-07-26 06:27:45.749419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.558 [2024-07-26 06:27:45.749682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.558 [2024-07-26 06:27:45.749709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.558 [2024-07-26 06:27:45.749728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.558 [2024-07-26 06:27:45.753541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.762809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.763261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.763299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.763321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.763605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.763861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.763888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.763907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.767657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.776810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.777289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.777326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.777369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.777645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.777898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.777925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.777944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.781705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.790894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.791380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.791426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.791450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.791723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.791975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.792007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.792027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.795782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.805012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.805480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.805527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.805551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.805826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.806112] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.806142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.806161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.809884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.819136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.819623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.819660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.819683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.819960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.820252] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.820281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.820316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.823995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.833185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.833633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.833670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.833692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.833974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.834275] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.834305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.834325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.838096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.847271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.847757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.847794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.847816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.848130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.848419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.848456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.848490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.852226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.861384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.861834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.861869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.861892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.862165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.862449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.862477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.862495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.866188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.875397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.875868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.875905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.875928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.876198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.559 [2024-07-26 06:27:45.876476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.559 [2024-07-26 06:27:45.876504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.559 [2024-07-26 06:27:45.876523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.559 [2024-07-26 06:27:45.880238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.559 [2024-07-26 06:27:45.889602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.559 [2024-07-26 06:27:45.890025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.559 [2024-07-26 06:27:45.890073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.559 [2024-07-26 06:27:45.890105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.559 [2024-07-26 06:27:45.890371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.819 [2024-07-26 06:27:45.890636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.819 [2024-07-26 06:27:45.890664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.819 [2024-07-26 06:27:45.890683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.819 [2024-07-26 06:27:45.894472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.819 [2024-07-26 06:27:45.903601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.819 [2024-07-26 06:27:45.904016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.819 [2024-07-26 06:27:45.904052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.819 [2024-07-26 06:27:45.904085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.819 [2024-07-26 06:27:45.904361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.819 [2024-07-26 06:27:45.904615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.819 [2024-07-26 06:27:45.904643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.819 [2024-07-26 06:27:45.904663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.819 [2024-07-26 06:27:45.908390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.819 [2024-07-26 06:27:45.917759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.819 [2024-07-26 06:27:45.918211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.819 [2024-07-26 06:27:45.918247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.819 [2024-07-26 06:27:45.918270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.819 [2024-07-26 06:27:45.918530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.819 [2024-07-26 06:27:45.918792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.819 [2024-07-26 06:27:45.918820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.819 [2024-07-26 06:27:45.918840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.819 [2024-07-26 06:27:45.922676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.819 [2024-07-26 06:27:45.932117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.819 [2024-07-26 06:27:45.932603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.819 [2024-07-26 06:27:45.932640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.819 [2024-07-26 06:27:45.932662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:34.819 [2024-07-26 06:27:45.932924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:35:34.819 [2024-07-26 06:27:45.933209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt [2024-07-26 06:27:45.933237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-07-26 06:27:45.933257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:34.819 [2024-07-26 06:27:45.937110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.819 [2024-07-26 06:27:45.946264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.819 [2024-07-26 06:27:45.946727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.819 [2024-07-26 06:27:45.946768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.819 [2024-07-26 06:27:45.946792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.819 [2024-07-26 06:27:45.947083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.819 [2024-07-26 06:27:45.947348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.819 [2024-07-26 06:27:45.947387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.819 [2024-07-26 06:27:45.947406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:34.819 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:34.819 [2024-07-26 06:27:45.951370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.819 [2024-07-26 06:27:45.956502] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:34.819 [2024-07-26 06:27:45.960713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.819 [2024-07-26 06:27:45.961237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.819 [2024-07-26 06:27:45.961274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.819 [2024-07-26 06:27:45.961297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.819 [2024-07-26 06:27:45.961592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.820 [2024-07-26 06:27:45.961832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.820 [2024-07-26 06:27:45.961858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.820 [2024-07-26 06:27:45.961875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.820 [2024-07-26 06:27:45.965622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.820 [2024-07-26 06:27:45.974525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:34.820 [2024-07-26 06:27:45.975120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.820 [2024-07-26 06:27:45.975156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:35:34.820 [2024-07-26 06:27:45.975179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:35:34.820 [2024-07-26 06:27:45.975472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:35:34.820 [2024-07-26 06:27:45.975724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:34.820 [2024-07-26 06:27:45.975749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:34.820 [2024-07-26 06:27:45.975767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:34.820 [2024-07-26 06:27:45.979443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:34.820 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.820 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:34.820 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.820 06:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.820 [2024-07-26 06:27:45.988834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.820 [2024-07-26 06:27:45.989390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.820 [2024-07-26 06:27:45.989428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.820 [2024-07-26 06:27:45.989470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.820 [2024-07-26 06:27:45.989754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.820 [2024-07-26 06:27:45.990017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.820 [2024-07-26 06:27:45.990055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.820 [2024-07-26 06:27:45.990105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.820 [2024-07-26 06:27:45.993957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.820 [2024-07-26 06:27:46.003090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.820 [2024-07-26 06:27:46.003752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.820 [2024-07-26 06:27:46.003799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.820 [2024-07-26 06:27:46.003828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.820 [2024-07-26 06:27:46.004149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.820 [2024-07-26 06:27:46.004440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.820 [2024-07-26 06:27:46.004469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.820 [2024-07-26 06:27:46.004492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.820 [2024-07-26 06:27:46.008292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.820 [2024-07-26 06:27:46.017225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.820 [2024-07-26 06:27:46.017681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.820 [2024-07-26 06:27:46.017717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.820 [2024-07-26 06:27:46.017740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.820 [2024-07-26 06:27:46.018022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.820 [2024-07-26 06:27:46.018329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.820 [2024-07-26 06:27:46.018372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.820 [2024-07-26 06:27:46.018398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.820 [2024-07-26 06:27:46.022170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.820 [2024-07-26 06:27:46.031453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.820 [2024-07-26 06:27:46.031935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.820 [2024-07-26 06:27:46.031971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.820 [2024-07-26 06:27:46.031994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.820 [2024-07-26 06:27:46.032272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.820 [2024-07-26 06:27:46.032554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.820 [2024-07-26 06:27:46.032582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.820 [2024-07-26 06:27:46.032601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.820 [2024-07-26 06:27:46.036439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.820 [2024-07-26 06:27:46.045467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.820 [2024-07-26 06:27:46.045882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.820 [2024-07-26 06:27:46.045919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.820 [2024-07-26 06:27:46.045942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.820 [2024-07-26 06:27:46.046217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.820 [2024-07-26 06:27:46.046501] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.820 [2024-07-26 06:27:46.046528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.820 [2024-07-26 06:27:46.046547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.820 [2024-07-26 06:27:46.050233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.820 Malloc0 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.820 [2024-07-26 06:27:46.059690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.820 [2024-07-26 06:27:46.060153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.820 [2024-07-26 06:27:46.060190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.820 [2024-07-26 06:27:46.060213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.820 [2024-07-26 06:27:46.060489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.820 [2024-07-26 06:27:46.060747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.820 [2024-07-26 06:27:46.060774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.820 [2024-07-26 06:27:46.060793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.820 [2024-07-26 06:27:46.064644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.820 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.820 [2024-07-26 06:27:46.073900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.820 [2024-07-26 06:27:46.074343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.820 [2024-07-26 06:27:46.074380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:35:34.820 [2024-07-26 06:27:46.074402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:35:34.820 [2024-07-26 06:27:46.074680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:35:34.820 [2024-07-26 06:27:46.074937] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:35:34.820 [2024-07-26 06:27:46.074965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.820 [2024-07-26 06:27:46.074984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.820 [2024-07-26 06:27:46.075832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.821 [2024-07-26 06:27:46.078824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.821 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.821 06:27:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 308655 00:35:34.821 [2024-07-26 06:27:46.087998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.821 [2024-07-26 06:27:46.136866] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:44.789 00:35:44.789 Latency(us) 00:35:44.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.789 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:44.789 Verification LBA range: start 0x0 length 0x4000 00:35:44.789 Nvme1n1 : 15.01 4412.10 17.23 9563.29 0.00 9131.18 1177.22 33787.45 00:35:44.789 =================================================================================================================== 00:35:44.789 Total : 4412.10 17.23 9563.29 0.00 9131.18 1177.22 33787.45 00:35:44.789 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:35:44.789 rmmod nvme_tcp 00:35:44.789 rmmod nvme_fabrics 00:35:44.789 rmmod nvme_keyring 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 309438 ']' 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 309438 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 309438 ']' 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 309438 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 309438 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 309438' 00:35:44.789 killing process with pid 309438 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 309438 00:35:44.789 06:27:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 309438 00:35:46.165 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.165 06:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.073 06:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:48.073 00:35:48.073 real 0m26.666s 00:35:48.073 user 1m13.298s 00:35:48.073 sys 0m4.574s 00:35:48.073 06:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:48.073 06:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.073 ************************************ 00:35:48.073 END TEST nvmf_bdevperf 00:35:48.073 ************************************ 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.397 ************************************ 00:35:48.397 START TEST nvmf_target_disconnect 00:35:48.397 ************************************ 
00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:48.397 * Looking for test storage... 00:35:48.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:48.397 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:48.398 06:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:50.324 
06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.324 06:28:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:50.324 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.324 06:28:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:50.324 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.324 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:50.325 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:50.325 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:35:50.325 06:28:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:50.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:35:50.325 00:35:50.325 --- 10.0.0.2 ping statistics --- 00:35:50.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.325 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:50.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:35:50.325 00:35:50.325 --- 10.0.0.1 ping statistics --- 00:35:50.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.325 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.325 ************************************ 00:35:50.325 START TEST nvmf_target_disconnect_tc1 00:35:50.325 ************************************ 00:35:50.325 06:28:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:50.325 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.584 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.584 [2024-07-26 06:28:01.750872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-07-26 06:28:01.751006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2000 with addr=10.0.0.2, port=4420 00:35:50.584 [2024-07-26 06:28:01.751134] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:50.584 [2024-07-26 06:28:01.751178] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:50.584 [2024-07-26 06:28:01.751211] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:35:50.584 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:50.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:50.584 Initializing NVMe Controllers 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.584 06:28:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.584 00:35:50.584 real 0m0.214s 00:35:50.584 user 0m0.085s 00:35:50.584 sys 0m0.128s 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:50.584 ************************************ 00:35:50.584 END TEST nvmf_target_disconnect_tc1 00:35:50.584 ************************************ 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.584 ************************************ 00:35:50.584 START TEST nvmf_target_disconnect_tc2 00:35:50.584 ************************************ 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=312855 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 312855 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 312855 ']' 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:50.584 06:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.842 [2024-07-26 06:28:01.929271] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:50.842 [2024-07-26 06:28:01.929424] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:50.842 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.842 [2024-07-26 06:28:02.061253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:51.099 [2024-07-26 06:28:02.276265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.099 [2024-07-26 06:28:02.276328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.099 [2024-07-26 06:28:02.276364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.099 [2024-07-26 06:28:02.276382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.099 [2024-07-26 06:28:02.276399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:51.099 [2024-07-26 06:28:02.276530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:35:51.099 [2024-07-26 06:28:02.276578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:35:51.099 [2024-07-26 06:28:02.276634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:35:51.099 [2024-07-26 06:28:02.276643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.666 Malloc0 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.666 06:28:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.666 [2024-07-26 06:28:02.956022] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:51.666 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.667 06:28:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.667 [2024-07-26 06:28:02.985701] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=313008 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:51.667 06:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.927 EAL: No free 2048 kB 
hugepages reported on node 1 00:35:53.842 06:28:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 312855 00:35:53.842 06:28:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Write completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Write completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Write completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Write completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Write completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Write completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Write completed with error (sct=0, sc=8) 00:35:53.842 starting I/O failed 00:35:53.842 Read completed with error (sct=0, 
sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 [2024-07-26 06:28:05.022548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 [2024-07-26 06:28:05.023227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Write completed with error (sct=0, sc=8)
00:35:53.842 starting I/O failed
00:35:53.842 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 [2024-07-26 06:28:05.023865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Write completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 Read completed with error (sct=0, sc=8)
00:35:53.843 starting I/O failed
00:35:53.843 [2024-07-26 06:28:05.024577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.843 [2024-07-26 06:28:05.024768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.024812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.024972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.025009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.025207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.025242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.025444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.025481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.025669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.025707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.025880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.025913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.026091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.026125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.026271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.026306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.026517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.026553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.026745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.026779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.027696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.027739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.027951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.027990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.028166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.028201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.028346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.028384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.843 qpair failed and we were unable to recover it.
00:35:53.843 [2024-07-26 06:28:05.028605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.843 [2024-07-26 06:28:05.028647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.028890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.028928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.029089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.029141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.029293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.029326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.029582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.029650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.029838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.029896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.030095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.030149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.030296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.030331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.031213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.031251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.031484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.031517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.031682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.031717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.031876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.031909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.032067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.032102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.032249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.032284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.032499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.032558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.032720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.032771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.032949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.032987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.033198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.033248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.033452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.033492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.033687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.033747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.033923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.033960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.034145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.034179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.034309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.034353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.034546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.034579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.034743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.034794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.035026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.035097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.035240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.035273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.035437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.035470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.035692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.035725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.035917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.035950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.036123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.036162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.036302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.036336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.036492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.036547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.036770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.844 [2024-07-26 06:28:05.036830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.844 qpair failed and we were unable to recover it.
00:35:53.844 [2024-07-26 06:28:05.037033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.037078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.037213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.037246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.037399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.037436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.037601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.037635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.037852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.037889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.038081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.038115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.038238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.038272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.038434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.038468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.038683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.038720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.038880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.038914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.039089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.039123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.039288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.039322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.039520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.039556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.039716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.039749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.039944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.039976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.040124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.040158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.040299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.040333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.040522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.040559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.040740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.040774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.040908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.040942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.041086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.041119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.041269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.041303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.041459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.041492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.041666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.041719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.041870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.041909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.042085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.042119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.042253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.042287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.042541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.042597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.042781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.042817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.043011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.043074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.043261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.043307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.043495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.043548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.043855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.043913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.845 [2024-07-26 06:28:05.044101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.845 [2024-07-26 06:28:05.044136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.845 qpair failed and we were unable to recover it.
00:35:53.846 [2024-07-26 06:28:05.044279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.846 [2024-07-26 06:28:05.044312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.846 qpair failed and we were unable to recover it.
00:35:53.846 [2024-07-26 06:28:05.044513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.846 [2024-07-26 06:28:05.044565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.846 qpair failed and we were unable to recover it.
00:35:53.846 [2024-07-26 06:28:05.044788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.846 [2024-07-26 06:28:05.044830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.846 qpair failed and we were unable to recover it.
00:35:53.846 [2024-07-26 06:28:05.045051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.045090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.045229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.045263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.045470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.045508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.045680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.045718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.045898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.045934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 
00:35:53.846 [2024-07-26 06:28:05.046134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.046168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.046309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.046342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.046507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.046540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.046723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.046762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.046933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.046980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 
00:35:53.846 [2024-07-26 06:28:05.047193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.047227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.047382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.047430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.047640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.047695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.047882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.047919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.048112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.048147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 
00:35:53.846 [2024-07-26 06:28:05.048284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.048318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.048615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.048651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.048807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.048845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.049019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.049057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.049226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.049260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 
00:35:53.846 [2024-07-26 06:28:05.049420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.049453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.049616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.049652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.049796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.049833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.050031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.050074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.050228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.050262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 
00:35:53.846 [2024-07-26 06:28:05.050438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.050471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.050670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.050723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.050919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.050958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.051132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.051167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.846 [2024-07-26 06:28:05.051301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.051334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 
00:35:53.846 [2024-07-26 06:28:05.051550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.846 [2024-07-26 06:28:05.051583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.846 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.051723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.051774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.051920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.051956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.052141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.052174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.052382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.052435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 
00:35:53.847 [2024-07-26 06:28:05.052625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.052680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.052859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.052910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.053077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.053121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.053254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.053287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.053504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.053560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 
00:35:53.847 [2024-07-26 06:28:05.053729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.053786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.053956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.053989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.054178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.054212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.054370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.054423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.054602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.054654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 
00:35:53.847 [2024-07-26 06:28:05.054818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.054852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.055010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.055055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.055235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.055268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.055485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.055535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.055733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.055793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 
00:35:53.847 [2024-07-26 06:28:05.055964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.055997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.056176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.056210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.056393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.056429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.056618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.056671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.056854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.056887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 
00:35:53.847 [2024-07-26 06:28:05.057046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.057091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.057265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.057298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.057503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.057564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.057795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.847 [2024-07-26 06:28:05.057834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.847 qpair failed and we were unable to recover it. 00:35:53.847 [2024-07-26 06:28:05.058017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.058054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 
00:35:53.848 [2024-07-26 06:28:05.058242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.058274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.058445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.058481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.058627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.058663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.058911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.058962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.059130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.059163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 
00:35:53.848 [2024-07-26 06:28:05.059297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.059329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.059499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.059531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.059724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.059761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.059938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.059974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.060170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.060203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 
00:35:53.848 [2024-07-26 06:28:05.060384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.060420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.060594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.060648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.060829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.060862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.061025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.061085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.061245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.061277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 
00:35:53.848 [2024-07-26 06:28:05.061455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.061487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.061691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.061746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.061893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.061929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.062118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.062152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.062356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.062418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 
00:35:53.848 [2024-07-26 06:28:05.062616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.062656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.062840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.062898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.063099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.063133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.063261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.063294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 00:35:53.848 [2024-07-26 06:28:05.063505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.848 [2024-07-26 06:28:05.063537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.848 qpair failed and we were unable to recover it. 
00:35:53.848 [2024-07-26 06:28:05.063736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.063794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.848 [2024-07-26 06:28:05.063972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.064005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.848 [2024-07-26 06:28:05.064147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.064181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.848 [2024-07-26 06:28:05.064330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.064387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.848 [2024-07-26 06:28:05.064611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.064671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.848 [2024-07-26 06:28:05.064806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.064838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.848 [2024-07-26 06:28:05.065040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.065078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.848 [2024-07-26 06:28:05.065253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.848 [2024-07-26 06:28:05.065285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.848 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.065468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.065524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.065690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.065745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.065924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.065960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.066121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.066155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.066316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.066349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.066516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.066549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.066770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.066802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.067006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.067042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.067255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.067288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.067427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.067460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.067637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.067693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.067872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.067908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.068056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.068099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.068256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.068303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.068485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.068519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.068657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.068709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.068913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.068946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.069079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.069116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.069271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.069304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.069471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.069520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.069750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.069782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.069925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.069956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.070119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.070153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.070310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.070391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.070572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.070611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.070815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.070872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.071053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.071132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.071329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.071378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.071572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.071625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.071884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.071938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.072105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.072139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.072284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.072318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.072457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.072492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.849 qpair failed and we were unable to recover it.
00:35:53.849 [2024-07-26 06:28:05.072630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.849 [2024-07-26 06:28:05.072662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.072826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.072858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.073019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.073051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.073227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.073259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.073384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.073416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.073634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.073689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.073926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.073979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.074172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.074204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.074337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.074369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.074501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.074533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.074685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.074717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.074923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.074972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.075111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.075145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.075323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.075363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.075564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.075599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.075791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.075851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.076021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.076056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.076253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.076285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.076477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.076509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.076671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.076719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.076917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.076982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.077169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.077205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.077356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.077410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.077617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.077670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.077836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.077888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.078040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.078080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.078256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.078304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.078529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.078562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.078718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.078750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.078902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.078951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.079120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.079152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.079352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.079403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.079637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.079671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.079849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.850 [2024-07-26 06:28:05.079883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.850 qpair failed and we were unable to recover it.
00:35:53.850 [2024-07-26 06:28:05.080070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.080125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.080277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.080312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.080539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.080572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.080749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.080792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.081002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.081050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.081246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.081281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.081489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.081542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.081752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.081808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.081984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.082018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.082182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.082217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.082346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.082380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.082514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.082548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.082715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.082748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.082910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.082943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.083109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.083142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.083282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.083315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.083459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.083492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.083682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.083716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.083922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.083958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.084116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.851 [2024-07-26 06:28:05.084149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.851 qpair failed and we were unable to recover it.
00:35:53.851 [2024-07-26 06:28:05.084320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.084384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.084622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.084662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.084844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.084882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.085072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.085136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.085300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.085333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 
00:35:53.851 [2024-07-26 06:28:05.085526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.085559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.085717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.085775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.086043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.086084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.086243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.086276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.086457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.086519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 
00:35:53.851 [2024-07-26 06:28:05.086768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.086823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.087009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.087045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.087265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.087300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.087452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.851 [2024-07-26 06:28:05.087489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.851 qpair failed and we were unable to recover it. 00:35:53.851 [2024-07-26 06:28:05.087658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.087717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 
00:35:53.852 [2024-07-26 06:28:05.087908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.087944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.088104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.088137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.088289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.088322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.088508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.088560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.088728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.088764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 
00:35:53.852 [2024-07-26 06:28:05.088921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.088957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.089111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.089145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.089307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.089346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.089528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.089565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.089767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.089803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 
00:35:53.852 [2024-07-26 06:28:05.089974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.090010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.090184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.090217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.090400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.090433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.090708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.090763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.090941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.090977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 
00:35:53.852 [2024-07-26 06:28:05.091184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.091233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.091425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.091473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.091667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.091720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.091889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.091943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.092121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.092156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 
00:35:53.852 [2024-07-26 06:28:05.092331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.092383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.092634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.092691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.092893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.092948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.093172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.093206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.093371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.093409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 
00:35:53.852 [2024-07-26 06:28:05.093645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.093695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.093880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.093940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.094117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.094151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.852 [2024-07-26 06:28:05.094342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.852 [2024-07-26 06:28:05.094396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.852 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.094649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.094707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 
00:35:53.853 [2024-07-26 06:28:05.094924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.094980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.095173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.095211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.095375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.095408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.095569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.095608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.095833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.095866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 
00:35:53.853 [2024-07-26 06:28:05.096025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.096068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.096231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.096278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.096490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.096543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.096834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.096890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.097123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.097156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 
00:35:53.853 [2024-07-26 06:28:05.097285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.097317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.097492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.097542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.097718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.097772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.097970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.098005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.098184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.098216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 
00:35:53.853 [2024-07-26 06:28:05.098382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.098414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.098600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.098652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.098881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.098936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.099121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.099154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.099324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.099385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 
00:35:53.853 [2024-07-26 06:28:05.099575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.099629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.099795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.099831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.099983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.100018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.100184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.100216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.100403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.100451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 
00:35:53.853 [2024-07-26 06:28:05.100641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.100680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.100865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.100903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.101127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.101161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.101326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.101378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.101580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.101613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 
00:35:53.853 [2024-07-26 06:28:05.101791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.101829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.101981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.102014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.853 [2024-07-26 06:28:05.102155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.853 [2024-07-26 06:28:05.102187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.853 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.102352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.102402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.102581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.102614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 
00:35:53.854 [2024-07-26 06:28:05.102736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.102768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.102932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.102964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.103178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.103225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.103400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.103435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.103622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.103659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 
00:35:53.854 [2024-07-26 06:28:05.103805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.103843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.104021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.104064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.104198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.104231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.104428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.104487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.104740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.104794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 
00:35:53.854 [2024-07-26 06:28:05.105009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.105041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.105182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.105215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.105382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.105413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.105565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.105596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.105722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.105753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 
00:35:53.854 [2024-07-26 06:28:05.105885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.105916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.106051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.106088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.106253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.106286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.106467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.106499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.106651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.106686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 
00:35:53.854 [2024-07-26 06:28:05.106906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.106937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.107084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.107116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.107283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.107316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.107499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.107534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.107806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.107860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 
00:35:53.854 [2024-07-26 06:28:05.108056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.108113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.108296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.108328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.108493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.108526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.108712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.108744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.108878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.108926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 
00:35:53.854 [2024-07-26 06:28:05.109108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.109141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.854 qpair failed and we were unable to recover it. 00:35:53.854 [2024-07-26 06:28:05.109284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.854 [2024-07-26 06:28:05.109316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.109526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.109558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.109749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.109781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.109926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.109974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 
00:35:53.855 [2024-07-26 06:28:05.110161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.110194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.110359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.110391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.110542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.110578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.110719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.110755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.110931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.110963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 
00:35:53.855 [2024-07-26 06:28:05.111207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.111239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.111422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.111469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.111639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.111674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.111872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.111927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.112108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.112161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 
00:35:53.855 [2024-07-26 06:28:05.112319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.112352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.112505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.112559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.112767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.112822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.113003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.113035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.113200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.113233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 
00:35:53.855 [2024-07-26 06:28:05.113482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.113554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.113808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.113844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.114032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.114107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.114254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.114287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.114439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.114471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 
00:35:53.855 [2024-07-26 06:28:05.114613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.114645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.114806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.114840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.115038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.115076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.115207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.115239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.115441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.115497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 
00:35:53.855 [2024-07-26 06:28:05.115690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.115723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.115903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.115935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.116126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.116160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.116293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.116325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 00:35:53.855 [2024-07-26 06:28:05.116497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.855 [2024-07-26 06:28:05.116533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.855 qpair failed and we were unable to recover it. 
00:35:53.855 [2024-07-26 06:28:05.116706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.116763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.116911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.116943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.117079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.117120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.117302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.117367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.117585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.117619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 
00:35:53.856 [2024-07-26 06:28:05.117804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.117841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.118018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.118055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.118231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.118263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.118448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.118487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.118725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.118780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 
00:35:53.856 [2024-07-26 06:28:05.118981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.119013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.119160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.119194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.119398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.119435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.119617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.119650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.119786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.119819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 
00:35:53.856 [2024-07-26 06:28:05.120031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.120074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.120270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.120303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.120482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.120519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.120741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.120796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.120987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.121019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 
00:35:53.856 [2024-07-26 06:28:05.121176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.121208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.121385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.121442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.121637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.121671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.121852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.121888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.122078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.122112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 
00:35:53.856 [2024-07-26 06:28:05.122277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.122309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.122443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.122476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.122676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.122708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.122868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.122900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.123048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.123088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 
00:35:53.856 [2024-07-26 06:28:05.123250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.123282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.123437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.123469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.123693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.123727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.123862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.856 [2024-07-26 06:28:05.123894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.856 qpair failed and we were unable to recover it. 00:35:53.856 [2024-07-26 06:28:05.124135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.124168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 
00:35:53.857 [2024-07-26 06:28:05.124299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.124332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.124487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.124519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.124701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.124734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.124901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.124952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.125155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.125187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 
00:35:53.857 [2024-07-26 06:28:05.125349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.125383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.125540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.125576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.125774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.125810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.125968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.126001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.126132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.126165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 
00:35:53.857 [2024-07-26 06:28:05.126377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.126413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.126569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.126601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.126739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.126790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.126950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.126982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.127147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.127180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 
00:35:53.857 [2024-07-26 06:28:05.127336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.127369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.127520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.127556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.127713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.127745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.127944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.127980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.128181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.128214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 
00:35:53.857 [2024-07-26 06:28:05.128399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.128431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.128577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.128613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.128811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.128847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.129028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.129066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.129236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.129269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 
00:35:53.857 [2024-07-26 06:28:05.129469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.129505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.129708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.129744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.129927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.857 [2024-07-26 06:28:05.129963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.857 qpair failed and we were unable to recover it. 00:35:53.857 [2024-07-26 06:28:05.130121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.130166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.130336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.130369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 
00:35:53.858 [2024-07-26 06:28:05.130552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.130588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.130746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.130778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.130939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.130971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.131142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.131175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.131314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.131347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 
00:35:53.858 [2024-07-26 06:28:05.131469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.131501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.131680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.131731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.131914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.131946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.132110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.132142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.132272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.132304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 
00:35:53.858 [2024-07-26 06:28:05.132521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.132553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.132715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.132747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.132888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.132921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.133082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.133115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.133271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.133303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 
00:35:53.858 [2024-07-26 06:28:05.133486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.133521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.133672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.133708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.133882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.133914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.134093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.134143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.134329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.134375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 
00:35:53.858 [2024-07-26 06:28:05.134546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.134581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.134777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.134814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.134983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.135019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.135182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.135215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.135365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.135401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 
00:35:53.858 [2024-07-26 06:28:05.135605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.135639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.135801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.135833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.135993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.136027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.136186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.136233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.136409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.136443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 
00:35:53.858 [2024-07-26 06:28:05.136575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.136623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.858 [2024-07-26 06:28:05.136833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.858 [2024-07-26 06:28:05.136890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.858 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.137083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.137116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.137283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.137316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.137480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.137516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 
00:35:53.859 [2024-07-26 06:28:05.137674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.137706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.137879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.137920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.138110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.138146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.138321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.138355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.138520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.138551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 
00:35:53.859 [2024-07-26 06:28:05.138739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.138774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.138929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.138961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.139166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.139202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.139379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.139431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.139595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.139630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 
00:35:53.859 [2024-07-26 06:28:05.139808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.139845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.140046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.140089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.140273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.140305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.140478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.140515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.140717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.140774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 
00:35:53.859 [2024-07-26 06:28:05.140963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.140995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.141126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.141158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.141319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.141370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.141536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.141568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.141732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.141764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 
00:35:53.859 [2024-07-26 06:28:05.141922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.141954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.142084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.142120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.142252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.142284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.142523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.142556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.142714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.142747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 
00:35:53.859 [2024-07-26 06:28:05.142959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.142995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.143196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.143229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.143366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.143398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.143606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.143642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 00:35:53.859 [2024-07-26 06:28:05.143855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.143891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.859 qpair failed and we were unable to recover it. 
00:35:53.859 [2024-07-26 06:28:05.144093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.859 [2024-07-26 06:28:05.144134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.144342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.144378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.144582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.144616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.144762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.144794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.144930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.144964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 
00:35:53.860 [2024-07-26 06:28:05.145131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.145184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.145367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.145400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.145610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.145647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.145857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.145901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 00:35:53.860 [2024-07-26 06:28:05.146069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.860 [2024-07-26 06:28:05.146106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:53.860 qpair failed and we were unable to recover it. 
00:35:53.860 [2024-07-26 06:28:05.146242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.146274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.146438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.146479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.146662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.146695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.146832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.146865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.147050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.147139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.147287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.147319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.147525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.147561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.147773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.147831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.148043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.148084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.148259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.148293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.148470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.148506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.148684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.148716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.148868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.148905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.149097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.149130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.149265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.149297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.149443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.149476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.149607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.149639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.149808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.149841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.150023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.150065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.150250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.150282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.150468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.150501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.150676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.150712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.150865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.150899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.151089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.860 [2024-07-26 06:28:05.151130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.860 qpair failed and we were unable to recover it.
00:35:53.860 [2024-07-26 06:28:05.151275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.151310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.151528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.151564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.151750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.151782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.151975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.152011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.152206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.152239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.152402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.152434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.152632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.152667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.152847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.152879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.153040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.153081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.153259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.153291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.153482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.153517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.153664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.153696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.153869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.153905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.154054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.154123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.154260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.154293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.154451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.154487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.154654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.154690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.154873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.154909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.155101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.155150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.155345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.155397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.155583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.155617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.155803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.155835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.156057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.156101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.156259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.156291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.156420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.156471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.156643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.156676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.156841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.156874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.157022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.157066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.157223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.157255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.157384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.157416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.157602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.157637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.157853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.157890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.158082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.158115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.861 [2024-07-26 06:28:05.158306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.861 [2024-07-26 06:28:05.158342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.861 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.158505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.158559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.158766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.158798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.158936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.158968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.159158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.159195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.159361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.159393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.159521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.159572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.159744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.159779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.159940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.159972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.160164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.160196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.160331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.160381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.160576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.160609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.160788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.160823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.160968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.161003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.161158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.161200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.161363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.161412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.161587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.161622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.161778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.161810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.161956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.161988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.162158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.162190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.162353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.162385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.162536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.162568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.162713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.162745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.162930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.162962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.163101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.163140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.163272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.163305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.163468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.163500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.163645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.163680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.163857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.163892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.862 [2024-07-26 06:28:05.164074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.862 [2024-07-26 06:28:05.164107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.862 qpair failed and we were unable to recover it.
00:35:53.863 [2024-07-26 06:28:05.164301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.863 [2024-07-26 06:28:05.164338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.863 qpair failed and we were unable to recover it.
00:35:53.863 [2024-07-26 06:28:05.164501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.863 [2024-07-26 06:28:05.164560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.863 qpair failed and we were unable to recover it.
00:35:53.863 [2024-07-26 06:28:05.164744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.863 [2024-07-26 06:28:05.164775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.863 qpair failed and we were unable to recover it.
00:35:53.863 [2024-07-26 06:28:05.164919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.863 [2024-07-26 06:28:05.164955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:53.863 qpair failed and we were unable to recover it.
00:35:54.143 [2024-07-26 06:28:05.165162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.143 [2024-07-26 06:28:05.165212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.143 qpair failed and we were unable to recover it.
00:35:54.143 [2024-07-26 06:28:05.165389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.143 [2024-07-26 06:28:05.165425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.143 qpair failed and we were unable to recover it.
00:35:54.143 [2024-07-26 06:28:05.165612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.143 [2024-07-26 06:28:05.165648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.143 qpair failed and we were unable to recover it.
00:35:54.143 [2024-07-26 06:28:05.165865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.143 [2024-07-26 06:28:05.165924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.143 qpair failed and we were unable to recover it.
00:35:54.143 [2024-07-26 06:28:05.166141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.143 [2024-07-26 06:28:05.166175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.143 qpair failed and we were unable to recover it. 00:35:54.143 [2024-07-26 06:28:05.166318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.143 [2024-07-26 06:28:05.166351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.143 qpair failed and we were unable to recover it. 00:35:54.143 [2024-07-26 06:28:05.166494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.143 [2024-07-26 06:28:05.166549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.143 qpair failed and we were unable to recover it. 00:35:54.143 [2024-07-26 06:28:05.166712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.143 [2024-07-26 06:28:05.166744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.143 qpair failed and we were unable to recover it. 00:35:54.143 [2024-07-26 06:28:05.166885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.143 [2024-07-26 06:28:05.166917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.143 qpair failed and we were unable to recover it. 
00:35:54.143 [2024-07-26 06:28:05.167135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.143 [2024-07-26 06:28:05.167182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.143 qpair failed and we were unable to recover it. 00:35:54.143 [2024-07-26 06:28:05.167322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.143 [2024-07-26 06:28:05.167357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.143 qpair failed and we were unable to recover it. 00:35:54.143 [2024-07-26 06:28:05.167559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.167596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.167798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.167838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.168013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.168047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 
00:35:54.144 [2024-07-26 06:28:05.168199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.168231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.168418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.168452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.168631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.168664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.168807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.168840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.169047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.169097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 
00:35:54.144 [2024-07-26 06:28:05.169275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.169307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.169459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.169494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.169677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.169709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.169874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.169906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.170099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.170148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 
00:35:54.144 [2024-07-26 06:28:05.170302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.170334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.170488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.170521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.170676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.170711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.170891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.170923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.171081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.171115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 
00:35:54.144 [2024-07-26 06:28:05.171322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.171371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.171569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.171606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.171775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.171808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.171986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.172022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.172226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.172259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 
00:35:54.144 [2024-07-26 06:28:05.172410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.172441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.172600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.172649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.172858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.172890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.173024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.173056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.173238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.173270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 
00:35:54.144 [2024-07-26 06:28:05.173429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.173461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.173585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.173617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.173822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.173858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.174076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.174113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 00:35:54.144 [2024-07-26 06:28:05.174240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.144 [2024-07-26 06:28:05.174272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.144 qpair failed and we were unable to recover it. 
00:35:54.144 [2024-07-26 06:28:05.174494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.174529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.174777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.174843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.175056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.175102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.175240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.175272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.175512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.175565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 
00:35:54.145 [2024-07-26 06:28:05.175734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.175769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.175995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.176032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.176213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.176246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.176439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.176470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.176631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.176667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 
00:35:54.145 [2024-07-26 06:28:05.176872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.176932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.177125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.177158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.177346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.177379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.177515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.177547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.177707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.177740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 
00:35:54.145 [2024-07-26 06:28:05.177918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.177965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.178114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.178150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.178328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.178361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.178534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.178571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.178760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.178796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 
00:35:54.145 [2024-07-26 06:28:05.178982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.179015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.179183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.179216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.179403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.179440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.179624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.179656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.179831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.179867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 
00:35:54.145 [2024-07-26 06:28:05.180066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.180108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.180305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.180341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.180533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.180565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.180726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.180777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.180975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.181008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 
00:35:54.145 [2024-07-26 06:28:05.181154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.181191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.181319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.181352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.181508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.181541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.145 [2024-07-26 06:28:05.181714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.145 [2024-07-26 06:28:05.181764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.145 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.181910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.181946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 
00:35:54.146 [2024-07-26 06:28:05.182125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.182158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.182316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.182353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.182561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.182597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.182799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.182831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.182993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.183029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 
00:35:54.146 [2024-07-26 06:28:05.183226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.183259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.183428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.183461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.183666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.183703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.183891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.183923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 00:35:54.146 [2024-07-26 06:28:05.184084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.146 [2024-07-26 06:28:05.184118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.146 qpair failed and we were unable to recover it. 
00:35:54.146 [2024-07-26 06:28:05.184326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.146 [2024-07-26 06:28:05.184362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.146 qpair failed and we were unable to recover it.
00:35:54.146 [the same three-record sequence — posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats 114 more times from 06:28:05.184548 through 06:28:05.208479, alternating between tqpair=0x6150001f2780 and tqpair=0x6150001ffe80]
00:35:54.150 [2024-07-26 06:28:05.208706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.208742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.208949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.208980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.209133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.209172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.209369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.209420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.209611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.209645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 
00:35:54.150 [2024-07-26 06:28:05.209817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.209854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.210050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.210096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.210251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.210283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.210469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.210521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.210704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.210736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 
00:35:54.150 [2024-07-26 06:28:05.210869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.210902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.211048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.211096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.211269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.211315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.211486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.211526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.211686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.211724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 
00:35:54.150 [2024-07-26 06:28:05.211928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.211961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.212131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.212164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.212347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.212383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.212548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.212580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.212765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.212798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 
00:35:54.150 [2024-07-26 06:28:05.212973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.213009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.213192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.213225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.213378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.213411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.213565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.213602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.213796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.213828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 
00:35:54.150 [2024-07-26 06:28:05.213993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.214025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.214205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.214237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.214422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.214474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.214661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.214695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 00:35:54.150 [2024-07-26 06:28:05.214876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.150 [2024-07-26 06:28:05.214913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.150 qpair failed and we were unable to recover it. 
00:35:54.151 [2024-07-26 06:28:05.215068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.215105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.215304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.215337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.215528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.215575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.215876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.215936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.216128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.216161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 
00:35:54.151 [2024-07-26 06:28:05.216324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.216357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.216601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.216633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.216821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.216854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.217003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.217039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.217202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.217234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 
00:35:54.151 [2024-07-26 06:28:05.217373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.217406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.217566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.217598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.217912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.217972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.218150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.218184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.218365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.218400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 
00:35:54.151 [2024-07-26 06:28:05.218581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.218617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.218804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.218836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.218978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.219012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.219209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.219242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.219400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.219433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 
00:35:54.151 [2024-07-26 06:28:05.219654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.219690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.219847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.219883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.220085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.220131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.220334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.220375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.220543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.220578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 
00:35:54.151 [2024-07-26 06:28:05.220759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.220791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.220962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.220998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.221163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.221196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.221357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.221389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.221603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.221639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 
00:35:54.151 [2024-07-26 06:28:05.221854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.221886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.222024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.222057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.222195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.151 [2024-07-26 06:28:05.222227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.151 qpair failed and we were unable to recover it. 00:35:54.151 [2024-07-26 06:28:05.222374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.222410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.222618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.222650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 
00:35:54.152 [2024-07-26 06:28:05.222810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.222846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.222990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.223026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.223222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.223255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.223455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.223490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.223669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.223706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 
00:35:54.152 [2024-07-26 06:28:05.223892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.223924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.224074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.224125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.224286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.224318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.224548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.224580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.224771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.224807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 
00:35:54.152 [2024-07-26 06:28:05.224957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.225005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.225148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.225180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.225343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.225376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.225524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.225559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.225737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.225770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 
00:35:54.152 [2024-07-26 06:28:05.225913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.225964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.226169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.226206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.226367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.226399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.226583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.226620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.226807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.226843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 
00:35:54.152 [2024-07-26 06:28:05.227027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.227064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.227264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.227296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.227439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.227475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.227653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.227685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.227825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.227862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 
00:35:54.152 [2024-07-26 06:28:05.228022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.228057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.228246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.228278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.228438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.228473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.228683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.228753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.152 [2024-07-26 06:28:05.228924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.228957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 
00:35:54.152 [2024-07-26 06:28:05.229124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.152 [2024-07-26 06:28:05.229175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.152 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.229326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.229361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.229543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.229576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.229768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.229800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.229958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.229995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 
00:35:54.153 [2024-07-26 06:28:05.230206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.230239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.230379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.230414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.230624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.230659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.230814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.230846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.230975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.231007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 
00:35:54.153 [2024-07-26 06:28:05.231193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.231227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.231397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.231429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.231639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.231674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.231823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.231859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.232043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.232083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 
00:35:54.153 [2024-07-26 06:28:05.232282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.232314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.232467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.232503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.232678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.232710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.232867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.232900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.233081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.233117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 
00:35:54.153 [2024-07-26 06:28:05.233321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.233353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.233542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.233589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.233814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.233872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.234066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.234098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.234292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.234325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 
00:35:54.153 [2024-07-26 06:28:05.234561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.234593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.234723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.234756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.234924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.234974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.235173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.235210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.235384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.235417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 
00:35:54.153 [2024-07-26 06:28:05.235558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.235595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.235806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.235838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.236023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.236056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.236271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.153 [2024-07-26 06:28:05.236307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.153 qpair failed and we were unable to recover it. 00:35:54.153 [2024-07-26 06:28:05.236476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.236508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 
00:35:54.154 [2024-07-26 06:28:05.236676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.236710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.236903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.236936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.237093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.237126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.237284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.237320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.237472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.237504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 
00:35:54.154 [2024-07-26 06:28:05.237693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.237727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.237900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.237933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.238137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.238173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.238371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.238404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.238564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.238597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 
00:35:54.154 [2024-07-26 06:28:05.238779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.238811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.238988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.239024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.239241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.239274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.239436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.239469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.239608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.239640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 
00:35:54.154 [2024-07-26 06:28:05.239826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.239859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.240037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.240080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.240257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.240294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.240505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.240537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.240686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.240722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 
00:35:54.154 [2024-07-26 06:28:05.240918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.240950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.241134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.241167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.241345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.241381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.154 [2024-07-26 06:28:05.241541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.154 [2024-07-26 06:28:05.241574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.154 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.241766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.241798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 
00:35:54.155 [2024-07-26 06:28:05.241978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.242015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.242178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.242210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.242372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.242404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.242585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.242621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.242820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.242855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 
00:35:54.155 [2024-07-26 06:28:05.243018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.243049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.243251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.243302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.243450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.243485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.243650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.243682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.243813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.243845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 
00:35:54.155 [2024-07-26 06:28:05.243999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.244031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.244203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.244235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.244387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.244423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.244607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.244639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 00:35:54.155 [2024-07-26 06:28:05.244795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.155 [2024-07-26 06:28:05.244827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.155 qpair failed and we were unable to recover it. 
00:35:54.155 [2024-07-26 06:28:05.245045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.155 [2024-07-26 06:28:05.245084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.155 qpair failed and we were unable to recover it.
[... the same three-line group — connect() failed, errno = 111 / sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats back-to-back throughout this span; only the timestamps advance ...]
00:35:54.159 [2024-07-26 06:28:05.269138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.159 [2024-07-26 06:28:05.269175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.159 qpair failed and we were unable to recover it.
00:35:54.159 [2024-07-26 06:28:05.269321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.269356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.269508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.269544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.269692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.269724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.269886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.269934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.270135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.270172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 
00:35:54.159 [2024-07-26 06:28:05.270321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.270357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.270529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.270561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.270692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.270724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.270881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.270933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.271087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.271124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 
00:35:54.159 [2024-07-26 06:28:05.271308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.271340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.271509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.271545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.271716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.271752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.271940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.271976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.272158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.272191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 
00:35:54.159 [2024-07-26 06:28:05.272362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.272398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.272565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.272600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.272792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.272825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.272999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.273031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.273219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.273256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 
00:35:54.159 [2024-07-26 06:28:05.273436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.273473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.273643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.273679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.159 qpair failed and we were unable to recover it. 00:35:54.159 [2024-07-26 06:28:05.273890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.159 [2024-07-26 06:28:05.273923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.274107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.274149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.274305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.274341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 
00:35:54.160 [2024-07-26 06:28:05.274521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.274556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.274711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.274742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.274900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.274932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.275079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.275112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.275328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.275360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 
00:35:54.160 [2024-07-26 06:28:05.275484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.275516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.275638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.275687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.275863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.275899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.276055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.276099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.276283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.276315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 
00:35:54.160 [2024-07-26 06:28:05.276492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.276524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.276652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.276684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.276850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.276882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.277040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.277079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.277248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.277284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 
00:35:54.160 [2024-07-26 06:28:05.277487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.277519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.277710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.277742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.277897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.277928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.278089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.278122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.278277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.278309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 
00:35:54.160 [2024-07-26 06:28:05.278500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.278536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.278692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.278725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.278849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.278881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.279042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.279104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.279261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.279299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 
00:35:54.160 [2024-07-26 06:28:05.279471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.279503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.279656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.279688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.279854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.279887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.280047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.280086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.160 [2024-07-26 06:28:05.280229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.280261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 
00:35:54.160 [2024-07-26 06:28:05.280439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.160 [2024-07-26 06:28:05.280474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.160 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.280627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.280662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.280804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.280840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.281020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.281052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.281190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.281222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 
00:35:54.161 [2024-07-26 06:28:05.281382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.281415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.281626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.281661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.281857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.281890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.282045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.282086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.282275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.282307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 
00:35:54.161 [2024-07-26 06:28:05.282472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.282504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.282639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.282671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.282881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.282916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.283129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.283166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.283352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.283384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 
00:35:54.161 [2024-07-26 06:28:05.283576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.283608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.283748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.283780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.283939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.283971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.284126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.284162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.284319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.284352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 
00:35:54.161 [2024-07-26 06:28:05.284513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.284545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.284733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.284765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.284940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.284973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.285115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.285149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.285312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.285366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 
00:35:54.161 [2024-07-26 06:28:05.285516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.285551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.285703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.285739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.285923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.285955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.286123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.286155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.286292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.286325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 
00:35:54.161 [2024-07-26 06:28:05.286486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.286519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.286651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.286683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.286846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.286887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.161 [2024-07-26 06:28:05.287049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.161 [2024-07-26 06:28:05.287093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.161 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.287222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.287254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 
00:35:54.162 [2024-07-26 06:28:05.287417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.287449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.287569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.287601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.287775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.287811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.287989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.288025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.288188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.288220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 
00:35:54.162 [2024-07-26 06:28:05.288375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.288407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.288550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.288582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.288740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.288772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.288901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.288933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.289113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.289150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 
00:35:54.162 [2024-07-26 06:28:05.289309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.289341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.289503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.289537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.289678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.289710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.289894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.289926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.290084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.290117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 
00:35:54.162 [2024-07-26 06:28:05.290280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.290316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.290524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.290556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.290698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.290730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.290914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.290946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.291082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.291115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 
00:35:54.162 [2024-07-26 06:28:05.291275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.291307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.291481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.291516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.291692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.291728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.291910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.291946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.292093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.292125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 
00:35:54.162 [2024-07-26 06:28:05.292315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.292347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.292473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.292505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.292656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.292691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.292871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.292903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 00:35:54.162 [2024-07-26 06:28:05.293056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.162 [2024-07-26 06:28:05.293097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.162 qpair failed and we were unable to recover it. 
00:35:54.163 [2024-07-26 06:28:05.293238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.293270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.293455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.293487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.293625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.293657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.293816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.293848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.294005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.294041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 
00:35:54.163 [2024-07-26 06:28:05.294191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.294227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.294379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.294412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.294571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.294603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.294725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.294761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.294901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.294933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 
00:35:54.163 [2024-07-26 06:28:05.295087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.295120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.295272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.295303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.295465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.295497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.295641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.295673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.295827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.295860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 
00:35:54.163 [2024-07-26 06:28:05.296031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.296069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.296228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.296260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.296391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.296423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.296610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.296643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.296802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.296834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 
00:35:54.163 [2024-07-26 06:28:05.296961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.296993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.297135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.297167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.297298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.297330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.297505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.297541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.297711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.297747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 
00:35:54.163 [2024-07-26 06:28:05.297919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.297955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.298107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.298140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.298323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.298355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.298499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.298531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.298695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.298728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 
00:35:54.163 [2024-07-26 06:28:05.298885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.298916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.299076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.299118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.299248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.163 [2024-07-26 06:28:05.299280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.163 qpair failed and we were unable to recover it. 00:35:54.163 [2024-07-26 06:28:05.299421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.299452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.299601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.299634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 
00:35:54.164 [2024-07-26 06:28:05.299762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.299798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.299932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.299964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.300155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.300188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.300325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.300357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.300484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.300516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 
00:35:54.164 [2024-07-26 06:28:05.300669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.300701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.300835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.300867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.301042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.301080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.301237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.301273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.301411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.301447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 
00:35:54.164 [2024-07-26 06:28:05.301593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.301629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.301778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.301810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.301993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.302025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.302188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.302220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.302423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.302456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 
00:35:54.164 [2024-07-26 06:28:05.302627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.302658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.302809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.302841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.302998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.303030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.303202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.303235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 00:35:54.164 [2024-07-26 06:28:05.303398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.164 [2024-07-26 06:28:05.303431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.164 qpair failed and we were unable to recover it. 
00:35:54.166 [2024-07-26 06:28:05.317762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.166 [2024-07-26 06:28:05.317794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.166 qpair failed and we were unable to recover it. 00:35:54.166 [2024-07-26 06:28:05.317964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.166 [2024-07-26 06:28:05.317999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.166 qpair failed and we were unable to recover it. 00:35:54.166 [2024-07-26 06:28:05.318193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.166 [2024-07-26 06:28:05.318252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.166 qpair failed and we were unable to recover it. 00:35:54.166 [2024-07-26 06:28:05.318410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.166 [2024-07-26 06:28:05.318446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.166 qpair failed and we were unable to recover it. 00:35:54.166 [2024-07-26 06:28:05.318654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.166 [2024-07-26 06:28:05.318689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.166 qpair failed and we were unable to recover it. 
00:35:54.166 [2024-07-26 06:28:05.318835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.166 [2024-07-26 06:28:05.318870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.166 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.319025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.319064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.319207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.319240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.319409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.319442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.319602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.319634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 
00:35:54.167 [2024-07-26 06:28:05.319797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.319829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.319968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.320000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.320188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.320221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.320352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.320400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.320583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.320615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 
00:35:54.167 [2024-07-26 06:28:05.320755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.320787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.320945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.320977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.321139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.321172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.321333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.321365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.321506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.321537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 
00:35:54.167 [2024-07-26 06:28:05.321698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.321730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.321901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.321933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.322130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.322162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.322303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.322335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 00:35:54.167 [2024-07-26 06:28:05.322470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.167 [2024-07-26 06:28:05.322502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.167 qpair failed and we were unable to recover it. 
00:35:54.167 [2024-07-26 06:28:05.322676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.322712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.322835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.322867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.323034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.323072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.323198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.323230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.323419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.323455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.323619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.323651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.323828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.323863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.324025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.324057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.324196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.324257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.324420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.324453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.324599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.324632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.324763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.324795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.324951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.325002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.325217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.325254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.325433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.325469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.325615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.325647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.325805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.325837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.325994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.326026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.167 qpair failed and we were unable to recover it.
00:35:54.167 [2024-07-26 06:28:05.326173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.167 [2024-07-26 06:28:05.326205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.326336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.326370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.326510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.326542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.326726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.326758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.326923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.326955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.327085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.327118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.327274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.327305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.327487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.327539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.327760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.327796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.327978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.328021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.328183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.328220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.328384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.328416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.328561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.328593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.328792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.328824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.328957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.328989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.329156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.329199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.329351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.329384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.329519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.329558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.329688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.329721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.329927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.329963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.330106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.330164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.330311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.330345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.330507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.330546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.330707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.330745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.330909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.330943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.331107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.331142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.331298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.331332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.331542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.331579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.331736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.331785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.331970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.332003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.332186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.332220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.332404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.332451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.332652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.332691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.332855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.332888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.333041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.333089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.333251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.333283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.333428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.333461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.333640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.168 [2024-07-26 06:28:05.333672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.168 qpair failed and we were unable to recover it.
00:35:54.168 [2024-07-26 06:28:05.333852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.333888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.334067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.334122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.334263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.334296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.334460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.334493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.334627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.334668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.334810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.334843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.334986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.335018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.335178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.335212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.335374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.335406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.335565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.335597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.335740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.335772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.335950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.335982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.336116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.336149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.336307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.336339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.336478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.336512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.336667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.336699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.336859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.336891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.337026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.337065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.337201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.337233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.337373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.337404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.337555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.337587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.337748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.337780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.337941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.337973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.169 qpair failed and we were unable to recover it.
00:35:54.169 [2024-07-26 06:28:05.338118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.169 [2024-07-26 06:28:05.338150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.338329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.338385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.338554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.338589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.338776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.338809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.338996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.339027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.339163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.339195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.339347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.339379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.339542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.339574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.339706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.339738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.339907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.339939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.340073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.340105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.340234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.340266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.340447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.340479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.340667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.340708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.340892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.340928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.341130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.170 [2024-07-26 06:28:05.341163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.170 qpair failed and we were unable to recover it.
00:35:54.170 [2024-07-26 06:28:05.341322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.341354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.341510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.341542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.341701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.341732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.341857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.341888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.342072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.342104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 
00:35:54.170 [2024-07-26 06:28:05.342236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.342268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.342404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.342435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.342599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.342631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.342790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.342822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.342955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.342987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 
00:35:54.170 [2024-07-26 06:28:05.343125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.343158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.170 [2024-07-26 06:28:05.343342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.170 [2024-07-26 06:28:05.343374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.170 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.343518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.343552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.343708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.343757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.343896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.343931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 
00:35:54.171 [2024-07-26 06:28:05.344121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.344154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.344287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.344319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.344478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.344529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.344699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.344734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.344884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.344920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 
00:35:54.171 [2024-07-26 06:28:05.345073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.345106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.345261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.345293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.345424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.345457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.345629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.345661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.345866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.345898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 
00:35:54.171 [2024-07-26 06:28:05.346072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.346125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.346251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.346283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.346432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.346464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.346586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.346618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.346759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.346792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 
00:35:54.171 [2024-07-26 06:28:05.346946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.347000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.347223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.347258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.347430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.347465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.347626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.347665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.347808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.347841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 
00:35:54.171 [2024-07-26 06:28:05.348009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.348043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.348222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.348262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.348411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.348444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.348583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.348624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.348775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.348808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 
00:35:54.171 [2024-07-26 06:28:05.348993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.349026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.349179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.349217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.349374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.349422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.349599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.349638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.349797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.349830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 
00:35:54.171 [2024-07-26 06:28:05.349996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.350028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.350195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.350228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.350388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.350420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.350569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.171 [2024-07-26 06:28:05.350603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.171 qpair failed and we were unable to recover it. 00:35:54.171 [2024-07-26 06:28:05.350737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.350769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.350902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.350934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.351086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.351135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.351275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.351307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.351436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.351468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.351631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.351665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.351831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.351863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.351987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.352019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.352184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.352216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.352349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.352381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.352569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.352600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.352731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.352763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.352896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.352928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.353102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.353157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.353318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.353350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.353528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.353560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.353719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.353773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.353951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.353986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.354142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.354174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.354308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.354341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.354499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.354531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.354657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.354689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.354807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.354839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.354968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.355000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.355138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.355182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.355345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.355377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.355512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.355544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.355671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.355703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.355860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.355892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.356046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.356090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.356242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.356275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.356464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.356495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.356657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.356690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.356869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.356901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.357040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.357081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.357260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.357292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 
00:35:54.172 [2024-07-26 06:28:05.357446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.357481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.172 [2024-07-26 06:28:05.357631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.172 [2024-07-26 06:28:05.357666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.172 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.357821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.357856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.358031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.358070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.358208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.358240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 
00:35:54.173 [2024-07-26 06:28:05.358404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.358437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.358605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.358638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.358786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.358846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.359021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.359082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.359279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.359326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 
00:35:54.173 [2024-07-26 06:28:05.359494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.359533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.359704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.359738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.359873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.359927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.360127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.360170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.360324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.360357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 
00:35:54.173 [2024-07-26 06:28:05.360502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.360536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.360697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.360736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.360925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.360961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.361157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.361192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.361381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.361433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 
00:35:54.173 [2024-07-26 06:28:05.361592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.361635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.361851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.361888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.362068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.362118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.362270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.362302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.362462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.362498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 
00:35:54.173 [2024-07-26 06:28:05.362670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.362706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.362844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.362880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.363025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.363081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.363269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.363301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.363431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.363463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 
00:35:54.173 [2024-07-26 06:28:05.363596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.363628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.363787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.363822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.364030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.364070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.364201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.364233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.364369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.364401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 
00:35:54.173 [2024-07-26 06:28:05.364559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.364591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.364774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.364806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.364963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.364999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.365194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.365227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.173 qpair failed and we were unable to recover it. 00:35:54.173 [2024-07-26 06:28:05.365368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.173 [2024-07-26 06:28:05.365404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 
00:35:54.174 [2024-07-26 06:28:05.365583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.365617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.365793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.365837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.366023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.366074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.366299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.366333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.366552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.366615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 
00:35:54.174 [2024-07-26 06:28:05.366793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.366856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.367043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.367082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.367244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.367299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.367506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.367554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.367743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.367802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 
00:35:54.174 [2024-07-26 06:28:05.368043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.368111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.368293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.368327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.368497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.368555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.368764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.368822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.368972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.369005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 
00:35:54.174 [2024-07-26 06:28:05.369144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.369178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.369368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.369401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.369577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.369610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.369768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.369804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.369987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.370027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 
00:35:54.174 [2024-07-26 06:28:05.370251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.370295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.370438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.370473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.370660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.370702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.370848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.370882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.371093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.371146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 
00:35:54.174 [2024-07-26 06:28:05.371310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.371345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.371485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.371521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.371679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.371712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.371852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.371886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.372038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.372081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 
00:35:54.174 [2024-07-26 06:28:05.372249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.372283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.174 qpair failed and we were unable to recover it. 00:35:54.174 [2024-07-26 06:28:05.372506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.174 [2024-07-26 06:28:05.372544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.372689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.372727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.372892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.372926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.373104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.373138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 
00:35:54.175 [2024-07-26 06:28:05.373270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.373304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.373449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.373483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.373670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.373703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.373863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.373904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.374122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.374156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 
00:35:54.175 [2024-07-26 06:28:05.374337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.374370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.374534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.374573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.374763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.374797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.374951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.374987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.375189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.375227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 
00:35:54.175 [2024-07-26 06:28:05.375427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.375470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.375641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.375674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.375812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.375845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.376009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.376042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.376191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.376224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 
00:35:54.175 [2024-07-26 06:28:05.376438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.376478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.376651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.376684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.376872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.376905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.377069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.377106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.377266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.377303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 
00:35:54.175 [2024-07-26 06:28:05.377516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.377552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.377748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.377781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.377939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.377972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.378130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.378168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.378322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.378358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 
00:35:54.175 [2024-07-26 06:28:05.378606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.378670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.378866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.378899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.379031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.379069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.379271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.379304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 00:35:54.175 [2024-07-26 06:28:05.379471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.175 [2024-07-26 06:28:05.379508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.175 qpair failed and we were unable to recover it. 
00:35:54.175 [2024-07-26 06:28:05.379674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.175 [2024-07-26 06:28:05.379707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.175 qpair failed and we were unable to recover it.
00:35:54.175 [2024-07-26 06:28:05.379840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.175 [2024-07-26 06:28:05.379873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.175 qpair failed and we were unable to recover it.
00:35:54.175 [2024-07-26 06:28:05.380028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.175 [2024-07-26 06:28:05.380070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.175 qpair failed and we were unable to recover it.
00:35:54.175 [2024-07-26 06:28:05.380282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.175 [2024-07-26 06:28:05.380319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.175 qpair failed and we were unable to recover it.
00:35:54.175 [2024-07-26 06:28:05.380487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.380520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.380662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.380695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.380859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.380893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.381078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.381115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.381295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.381327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.381485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.381521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.381723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.381756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.381893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.381937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.382126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.382160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.382299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.382353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.382506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.382543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.382694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.382730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.382886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.382919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.383100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.383138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.383360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.383393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.383576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.383608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.383783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.383815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.383997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.384032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.384211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.384245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.384450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.384486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.384645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.384677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.384838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.384870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.385043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.385092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.385269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.385304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.385459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.385491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.385671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.385733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.385911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.385948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.386120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.386157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.386333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.386366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.386618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.386673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.386862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.386896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.387066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.387105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.387268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.387301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.387484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.387520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.387703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.387740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.387895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.387932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.388142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.388175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.176 [2024-07-26 06:28:05.388323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.176 [2024-07-26 06:28:05.388359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.176 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.388531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.388567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.388733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.388770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.388950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.388982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.389135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.389171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.389381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.389413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.389629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.389703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.389888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.389923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.390089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.390128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.390332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.390369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.390584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.390641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.390852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.390885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.391097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.391135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.391308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.391344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.391563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.391618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.391802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.391836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.391975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.392007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.392177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.392211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.392392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.392429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.392655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.392692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.392873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.392910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.393112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.393145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.393313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.393366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.393517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.393550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.393682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.393732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.393919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.393951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.394104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.394155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.394358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.394391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.394569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.394605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.394789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.394822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.394978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.395027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.395210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.395244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.395385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.395417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.395584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.395616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.395836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.395873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.396026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.396069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.396282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.396318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.396514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.177 [2024-07-26 06:28:05.396550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.177 qpair failed and we were unable to recover it.
00:35:54.177 [2024-07-26 06:28:05.396773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.396809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.396986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.397019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.397176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.397213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.397415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.397451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.397621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.397677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.397826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.397859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.398001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.398034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.398243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.398279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.398482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.398518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.398677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.398709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.398887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.398923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.399174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.399209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.399393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.399440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.399593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.399625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.399767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.399818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.399991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.178 [2024-07-26 06:28:05.400027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.178 qpair failed and we were unable to recover it.
00:35:54.178 [2024-07-26 06:28:05.400190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.400227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.400438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.400470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.400601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.400634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.400775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.400807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.401017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.401053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 
00:35:54.178 [2024-07-26 06:28:05.401226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.401260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.401451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.401507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.401716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.401757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.401948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.401982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.402142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.402175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 
00:35:54.178 [2024-07-26 06:28:05.402306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.402339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.402479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.402512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.402705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.402741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.402943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.402975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.403121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.403158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 
00:35:54.178 [2024-07-26 06:28:05.403364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.403400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.403609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.403641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.403798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.403830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.404006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.404042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.404254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.404289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 
00:35:54.178 [2024-07-26 06:28:05.404462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.404498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.178 [2024-07-26 06:28:05.404660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.178 [2024-07-26 06:28:05.404693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.178 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.404820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.404871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.405083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.405120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.405302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.405334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 
00:35:54.179 [2024-07-26 06:28:05.405492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.405524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.405719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.405781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.405951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.405985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.406125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.406158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.406288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.406320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 
00:35:54.179 [2024-07-26 06:28:05.406531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.406596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.406806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.406839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.406972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.407004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.407205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.407238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.407474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.407543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 
00:35:54.179 [2024-07-26 06:28:05.407750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.407785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.407964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.408000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.408199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.408232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.408389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.408425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.408579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.408615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 
00:35:54.179 [2024-07-26 06:28:05.408787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.408823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.409008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.409040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.409235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.409271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.409450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.409486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.409634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.409671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 
00:35:54.179 [2024-07-26 06:28:05.409827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.409861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.409996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.410033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.410188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.410225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.410415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.410450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.410638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.410670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 
00:35:54.179 [2024-07-26 06:28:05.410806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.410856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.411035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.411080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.411266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.179 [2024-07-26 06:28:05.411299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.179 qpair failed and we were unable to recover it. 00:35:54.179 [2024-07-26 06:28:05.411430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.411464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.411626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.411675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 
00:35:54.180 [2024-07-26 06:28:05.411886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.411922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.412075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.412112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.412267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.412300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.412429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.412479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.412718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.412751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 
00:35:54.180 [2024-07-26 06:28:05.412913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.412956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.413175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.413208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.413363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.413400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.413586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.413623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.413806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.413839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 
00:35:54.180 [2024-07-26 06:28:05.414030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.414069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.414289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.414322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.414484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.414517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.414723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.414759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.414921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.414954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 
00:35:54.180 [2024-07-26 06:28:05.415116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.415166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.415377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.415413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.415594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.415649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.415800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.415832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.416022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.416066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 
00:35:54.180 [2024-07-26 06:28:05.416249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.416286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.416462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.416498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.416657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.416689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.416872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.416905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.417087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.417120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 
00:35:54.180 [2024-07-26 06:28:05.417311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.417363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.417545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.417578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.417731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.417767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.417943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.417980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 00:35:54.180 [2024-07-26 06:28:05.418165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.180 [2024-07-26 06:28:05.418202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.180 qpair failed and we were unable to recover it. 
00:35:54.180 [2024-07-26 06:28:05.418392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.180 [2024-07-26 06:28:05.418424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.180 qpair failed and we were unable to recover it.
[The three-line error record above repeats verbatim, with only the timestamps advancing (2024-07-26 06:28:05.418558 through 06:28:05.442855), for the same tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420; the repeated copies have been elided.]
00:35:54.183 [2024-07-26 06:28:05.443019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.183 [2024-07-26 06:28:05.443080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.183 qpair failed and we were unable to recover it. 00:35:54.183 [2024-07-26 06:28:05.443290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.183 [2024-07-26 06:28:05.443322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.183 qpair failed and we were unable to recover it. 00:35:54.183 [2024-07-26 06:28:05.443508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.443544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.443741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.443774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.443954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.443990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 
00:35:54.184 [2024-07-26 06:28:05.444177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.444210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.444461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.444517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.444727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.444768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.444976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.445013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.445229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.445261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 
00:35:54.184 [2024-07-26 06:28:05.445398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.445431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.445597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.445630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.445803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.445838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.446028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.446067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.446258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.446294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 
00:35:54.184 [2024-07-26 06:28:05.446460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.446496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.446635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.446671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.446874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.446906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.447088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.447128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.447287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.447323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 
00:35:54.184 [2024-07-26 06:28:05.447539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.447571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.447760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.447792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.447969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.448006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.448181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.448214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.448401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.448433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 
00:35:54.184 [2024-07-26 06:28:05.448635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.448668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.448797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.448830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.449008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.449069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.449230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.449267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.449453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.449485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 
00:35:54.184 [2024-07-26 06:28:05.449718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.449775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.449994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.450030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.450197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.450233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.450391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.184 [2024-07-26 06:28:05.450424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.184 qpair failed and we were unable to recover it. 00:35:54.184 [2024-07-26 06:28:05.450591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.450642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 
00:35:54.185 [2024-07-26 06:28:05.450798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.450834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.451013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.451049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.451227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.451268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.451473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.451545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.451699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.451735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 
00:35:54.185 [2024-07-26 06:28:05.451912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.451947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.452115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.452150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.452398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.452458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.452655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.452687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.452848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.452897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 
00:35:54.185 [2024-07-26 06:28:05.453065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.453098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.453249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.453281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.453489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.453526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.453655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.453703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.453858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.453892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 
00:35:54.185 [2024-07-26 06:28:05.454074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.454116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.454299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.454335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.454476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.454519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.454738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.454771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.454908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.454944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 
00:35:54.185 [2024-07-26 06:28:05.455153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.455190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.455358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.455395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.455552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.455584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.455742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.455774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.185 [2024-07-26 06:28:05.455936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.455972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 
00:35:54.185 [2024-07-26 06:28:05.456176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.185 [2024-07-26 06:28:05.456213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.185 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.456432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.456465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.456656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.456690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.456859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.456891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.457044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.457102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 
00:35:54.468 [2024-07-26 06:28:05.457285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.457318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.457505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.457541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.457732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.457765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.457896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.457929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.458065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.458098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 
00:35:54.468 [2024-07-26 06:28:05.458279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.458315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.458504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.458537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.458735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.458783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.458949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.458981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.459156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.459189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 
00:35:54.468 [2024-07-26 06:28:05.459357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.459393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.459558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.459595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.459786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.459818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.459974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.460006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.460171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.460204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 
00:35:54.468 [2024-07-26 06:28:05.460355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.460392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.460611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.460643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.460846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.460881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.461072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.468 [2024-07-26 06:28:05.461115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.468 qpair failed and we were unable to recover it. 00:35:54.468 [2024-07-26 06:28:05.461293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.469 [2024-07-26 06:28:05.461329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.469 qpair failed and we were unable to recover it. 
00:35:54.469 [2024-07-26 06:28:05.461475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.461507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.461718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.461754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.461908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.461949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.462133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.462167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.462361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.462393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.462609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.462645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.462824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.462860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.463048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.463089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.463247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.463280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.463435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.463467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.463651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.463687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.463874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.463906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.464047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.464087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.464274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.464309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.464480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.464516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.464686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.464722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.464905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.464937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.465074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.465115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.465318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.465355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.465554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.465590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.465743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.465775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.465942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.465974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.466180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.466217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.466368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.466405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.466585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.466618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.466755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.466787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.466917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.466950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.467101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.467136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.467301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.467337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.467509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.467545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.467755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.467791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.467930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.467977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.468124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.468158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.468322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.468373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.468591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.469 [2024-07-26 06:28:05.468624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.469 qpair failed and we were unable to recover it.
00:35:54.469 [2024-07-26 06:28:05.468834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.468870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.469022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.469055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.469255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.469291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.469495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.469532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.469704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.469740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.469946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.469979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.470208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.470246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.470451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.470491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.470638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.470675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.470879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.470911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.471171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.471229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.471390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.471427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.471603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.471640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.471822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.471856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.472013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.472047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.472228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.472263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.472460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.472492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.472654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.472687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.472824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.472856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.473021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.473053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.473231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.473263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.473399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.473431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.473590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.473623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.473765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.473797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.473945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.473977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.474115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.474148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.474340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.474395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.474579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.474616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.474818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.474855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.475029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.475074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.475217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.475268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.475453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.475490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.475651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.475684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.475843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.475875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.476016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.476048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.476296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.476332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.476507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.476543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.470 qpair failed and we were unable to recover it.
00:35:54.470 [2024-07-26 06:28:05.476750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.470 [2024-07-26 06:28:05.476783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.476929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.476962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.477129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.477177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.477363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.477396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.477551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.477583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.477756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.477792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.477949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.477985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.478162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.478202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.478365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.478397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.478557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.478590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.478802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.478843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.479017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.479054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.479253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.479285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.479508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.479543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.479733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.479769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.479956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.479988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.480148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.480181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.480340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.480372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.480561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.480597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.480773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.480810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.481000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.481032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.481204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.481241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.481427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.471 [2024-07-26 06:28:05.481464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.471 qpair failed and we were unable to recover it.
00:35:54.471 [2024-07-26 06:28:05.481646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.481688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.481855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.481888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.482076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.482123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.482346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.482382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.482555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.482591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 
00:35:54.471 [2024-07-26 06:28:05.482750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.482782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.482954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.482990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.483241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.483274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.483409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.483441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.483575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.483608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 
00:35:54.471 [2024-07-26 06:28:05.483764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.483815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.483996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.484032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.484262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.484298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.484481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.484513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 00:35:54.471 [2024-07-26 06:28:05.484776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.471 [2024-07-26 06:28:05.484833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.471 qpair failed and we were unable to recover it. 
00:35:54.472 [2024-07-26 06:28:05.485039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.485085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.485266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.485302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.485481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.485513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.485677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.485709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.485846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.485897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 
00:35:54.472 [2024-07-26 06:28:05.486116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.486150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.486307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.486339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.486482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.486517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.486664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.486700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.486877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.486913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 
00:35:54.472 [2024-07-26 06:28:05.487105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.487138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.487336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.487373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.487524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.487564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.487721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.487754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.487916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.487948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 
00:35:54.472 [2024-07-26 06:28:05.488211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.488268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.488481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.488517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.488709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.488745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.488900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.488933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.489126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.489188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 
00:35:54.472 [2024-07-26 06:28:05.489381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.489413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.489566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.489616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.489768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.489801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.490002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.490037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.490229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.490265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 
00:35:54.472 [2024-07-26 06:28:05.490441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.490478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.490686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.490718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.490879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.490915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.491075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.491112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.491282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.491314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 
00:35:54.472 [2024-07-26 06:28:05.491472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.491504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.491716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.491774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.491953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.491989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.492161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.492195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 00:35:54.472 [2024-07-26 06:28:05.492340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.472 [2024-07-26 06:28:05.492373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.472 qpair failed and we were unable to recover it. 
00:35:54.473 [2024-07-26 06:28:05.492531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.492563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.492735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.492769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.492928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.492977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.493139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.493171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.493327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.493359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 
00:35:54.473 [2024-07-26 06:28:05.493492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.493525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.493710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.493745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.493919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.493951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.494136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.494173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.494378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.494413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 
00:35:54.473 [2024-07-26 06:28:05.494594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.494628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.494812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.494844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.495025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.495071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.495223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.495259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.495451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.495492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 
00:35:54.473 [2024-07-26 06:28:05.495684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.495716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.495876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.495908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.496074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.496111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.496318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.496354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.496510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.496542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 
00:35:54.473 [2024-07-26 06:28:05.496697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.496729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.496952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.496988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.497186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.497223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.497398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.497430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.497611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.497647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 
00:35:54.473 [2024-07-26 06:28:05.497843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.497876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.498012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.498045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.498186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.498218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.498373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.498405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.498567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.498599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 
00:35:54.473 [2024-07-26 06:28:05.498763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.498796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.473 qpair failed and we were unable to recover it. 00:35:54.473 [2024-07-26 06:28:05.498958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.473 [2024-07-26 06:28:05.498994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 00:35:54.474 [2024-07-26 06:28:05.499201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.499234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 00:35:54.474 [2024-07-26 06:28:05.499420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.499456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 00:35:54.474 [2024-07-26 06:28:05.499648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.499681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 
00:35:54.474 [2024-07-26 06:28:05.499871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.499903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 00:35:54.474 [2024-07-26 06:28:05.500143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.500202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 00:35:54.474 [2024-07-26 06:28:05.500357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.500393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 00:35:54.474 [2024-07-26 06:28:05.500541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.500591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 00:35:54.474 [2024-07-26 06:28:05.500752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.474 [2024-07-26 06:28:05.500785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.474 qpair failed and we were unable to recover it. 
00:35:54.477 [2024-07-26 06:28:05.524112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.524146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.524311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.524346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.524478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.524517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.524688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.524720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.524905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.524939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 
00:35:54.477 [2024-07-26 06:28:05.525097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.525131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.525256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.525289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.525462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.525499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.525681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.525727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.525902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.525935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 
00:35:54.477 [2024-07-26 06:28:05.526085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.526119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.526305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.526348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.526516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.526553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.526761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.526795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.526957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.526997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 
00:35:54.477 [2024-07-26 06:28:05.527157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.527191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.527324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.527375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.527564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.527602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.527776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.527809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.527977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.528010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 
00:35:54.477 [2024-07-26 06:28:05.528152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.528198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.528376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.528413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.528600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.528637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.528813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.477 [2024-07-26 06:28:05.528855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.477 qpair failed and we were unable to recover it. 00:35:54.477 [2024-07-26 06:28:05.529026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.529066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.529235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.529268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.529457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.529494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.529642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.529674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.529813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.529848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.530108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.530144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.530319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.530352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.530509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.530542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.530733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.530772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.530966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.531005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.531201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.531239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.531425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.531473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.531623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.531659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.531871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.531904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.532042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.532088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.532253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.532285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.532436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.532489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.532691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.532730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.532881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.532914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.533048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.533090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.533230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.533263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.533399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.533433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.533620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.533671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.533816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.533852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.534001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.534037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.534251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.534284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.534458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.534491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.534628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.534667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.534830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.534862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.535036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.535097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.535253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.535292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.535524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.535562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.535719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.535766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.535926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.535958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.536099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.536134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.536294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.536328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 
00:35:54.478 [2024-07-26 06:28:05.536483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.536517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.478 qpair failed and we were unable to recover it. 00:35:54.478 [2024-07-26 06:28:05.536677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.478 [2024-07-26 06:28:05.536710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.536851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.536900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.537069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.537107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.537271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.537307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 
00:35:54.479 [2024-07-26 06:28:05.537461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.537495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.537667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.537700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.537866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.537898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.538131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.538180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.538354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.538393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 
00:35:54.479 [2024-07-26 06:28:05.538561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.538596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.538763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.538796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.538967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.539001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.539151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.539195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 00:35:54.479 [2024-07-26 06:28:05.539366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.479 [2024-07-26 06:28:05.539403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.479 qpair failed and we were unable to recover it. 
00:35:54.479 [2024-07-26 06:28:05.539565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.479 [2024-07-26 06:28:05.539602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.479 qpair failed and we were unable to recover it.
00:35:54.482 [2024-07-26 06:28:05.562891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.562925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.563096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.563130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.563276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.563309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.563462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.563494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.563634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.563667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 
00:35:54.482 [2024-07-26 06:28:05.563827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.563861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.564011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.564048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.564208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.564241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.564406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.564440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.564624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.564657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 
00:35:54.482 [2024-07-26 06:28:05.564791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.564824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.564989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.565022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.565186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.565222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.565394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.565434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 00:35:54.482 [2024-07-26 06:28:05.565645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.482 [2024-07-26 06:28:05.565681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.482 qpair failed and we were unable to recover it. 
00:35:54.482 [2024-07-26 06:28:05.565835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.565869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.566026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.566067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.566209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.566242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.566398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.566431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.566583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.566616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 
00:35:54.483 [2024-07-26 06:28:05.566777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.566810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.566967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.567000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.567179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.567213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.567406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.567442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.567642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.567680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 
00:35:54.483 [2024-07-26 06:28:05.567843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.567876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.568044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.568084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.568224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.568258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.568423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.568456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.568620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.568671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 
00:35:54.483 [2024-07-26 06:28:05.568866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.568899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.569064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.569107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.569267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.569300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.569487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.569520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.569710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.569747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 
00:35:54.483 [2024-07-26 06:28:05.569951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.569991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.570154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.570188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.570328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.570363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.570538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.570572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.570698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.570731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 
00:35:54.483 [2024-07-26 06:28:05.570891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.570924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.571102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.571137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.571351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.571388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.571577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.571611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.571803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.571836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 
00:35:54.483 [2024-07-26 06:28:05.571971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.572005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.572158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.572192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.572366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.572399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.572595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.572628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.572759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.572792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 
00:35:54.483 [2024-07-26 06:28:05.572983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.573016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.573256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.573290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.483 qpair failed and we were unable to recover it. 00:35:54.483 [2024-07-26 06:28:05.573468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.483 [2024-07-26 06:28:05.573506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.573688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.573721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.573881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.573932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 
00:35:54.484 [2024-07-26 06:28:05.574139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.574172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.574341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.574374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.574534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.574567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.574730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.574762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.574920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.574953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 
00:35:54.484 [2024-07-26 06:28:05.575139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.575177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.575333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.575370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.575574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.575611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.575796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.575830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.576007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.576050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 
00:35:54.484 [2024-07-26 06:28:05.576241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.576278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.576434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.576470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.576633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.576669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.576808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.576841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.577034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.577075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 
00:35:54.484 [2024-07-26 06:28:05.577215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.577249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.577436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.577469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.577630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.577666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.577847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.577884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 00:35:54.484 [2024-07-26 06:28:05.578050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.484 [2024-07-26 06:28:05.578088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.484 qpair failed and we were unable to recover it. 
00:35:54.484 [2024-07-26 06:28:05.578245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.484 [2024-07-26 06:28:05.578279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.484 qpair failed and we were unable to recover it.
00:35:54.484-00:35:54.487 [2024-07-26 06:28:05.578437 - 06:28:05.602704] -- the same three-message sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeated 114 more times, alternating between tqpair=0x61500021ff00 and tqpair=0x6150001ffe80, all with addr=10.0.0.2, port=4420 --
00:35:54.487 [2024-07-26 06:28:05.602883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.487 [2024-07-26 06:28:05.602921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.487 qpair failed and we were unable to recover it. 00:35:54.487 [2024-07-26 06:28:05.603134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.487 [2024-07-26 06:28:05.603168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.487 qpair failed and we were unable to recover it. 00:35:54.487 [2024-07-26 06:28:05.603354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.487 [2024-07-26 06:28:05.603392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.487 qpair failed and we were unable to recover it. 00:35:54.487 [2024-07-26 06:28:05.603571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.487 [2024-07-26 06:28:05.603608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.487 qpair failed and we were unable to recover it. 00:35:54.487 [2024-07-26 06:28:05.603786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.487 [2024-07-26 06:28:05.603824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.487 qpair failed and we were unable to recover it. 
00:35:54.487 [2024-07-26 06:28:05.603978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.487 [2024-07-26 06:28:05.604011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.487 qpair failed and we were unable to recover it. 00:35:54.487 [2024-07-26 06:28:05.604180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.604216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.604386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.604422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.604597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.604633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.604798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.604830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 
00:35:54.488 [2024-07-26 06:28:05.604988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.605025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.605193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.605226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.605409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.605444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.605623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.605656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.605826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.605863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 
00:35:54.488 [2024-07-26 06:28:05.606019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.606056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.606250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.606293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.606476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.606509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.606667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.606700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.606902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.606938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 
00:35:54.488 [2024-07-26 06:28:05.607120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.607157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.607340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.607374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.607578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.607614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.607792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.607828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.607998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.608034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 
00:35:54.488 [2024-07-26 06:28:05.608211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.608244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.608416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.608452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.608661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.608697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.608844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.608881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.609052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.609090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 
00:35:54.488 [2024-07-26 06:28:05.609246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.609283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.609464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.609497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.609655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.609705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.609855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.609905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.610055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.610102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 
00:35:54.488 [2024-07-26 06:28:05.610253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.610286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.610473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.610509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.610688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.610721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.610900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.610937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.611139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.611176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 
00:35:54.488 [2024-07-26 06:28:05.611345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.611381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.611562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.488 [2024-07-26 06:28:05.611596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.488 qpair failed and we were unable to recover it. 00:35:54.488 [2024-07-26 06:28:05.611777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.611813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.612013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.612050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.612207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.612243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-07-26 06:28:05.612429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.612463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.612592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.612624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.612787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.612836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.612986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.613022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.613231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.613264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-07-26 06:28:05.613454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.613511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.613675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.613708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.613841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.613874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.614076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.614109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.614264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.614296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-07-26 06:28:05.614435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.614467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.614603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.614635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.614794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.614826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.615004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.615040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.615222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.615258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-07-26 06:28:05.615431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.615467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.615650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.615682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.615825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.615861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.616027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.616073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.616237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.616273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-07-26 06:28:05.616489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.616521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.616736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.616769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.616943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.616980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.617165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.617199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.617357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.617390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-07-26 06:28:05.617625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.617657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.617784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.617816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.617967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.618003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.618176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.618208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-07-26 06:28:05.618366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.489 [2024-07-26 06:28:05.618415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-07-26 06:28:05.618566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.489 [2024-07-26 06:28:05.618602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.489 qpair failed and we were unable to recover it.
[The three log records above repeat for every subsequent reconnect attempt from 06:28:05.618 through 06:28:05.643 (console timestamps 00:35:54.489-00:35:54.493); only the per-attempt timestamp changes. Every attempt hits the same tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420, and errno = 111 (ECONNREFUSED), and every qpair fails without recovery.]
00:35:54.493 [2024-07-26 06:28:05.643265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.643301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.643482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.643515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.643720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.643757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.643933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.643965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.644102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.644135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 
00:35:54.493 [2024-07-26 06:28:05.644298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.644348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.644562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.644594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.644782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.644814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.644996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.645032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.645181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.645217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 
00:35:54.493 [2024-07-26 06:28:05.645417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.645453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.645629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.645661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.645798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.645830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.645991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.646041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.646211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.646248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 
00:35:54.493 [2024-07-26 06:28:05.646465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.646498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.646695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.646731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.646917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.646954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.647108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.647156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.647332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.647368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 
00:35:54.493 [2024-07-26 06:28:05.647545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.647582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.647719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.647755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.647937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.647970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.648129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.648163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.648320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.648353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 
00:35:54.493 [2024-07-26 06:28:05.648505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.648538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.648729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.493 [2024-07-26 06:28:05.648766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.493 qpair failed and we were unable to recover it. 00:35:54.493 [2024-07-26 06:28:05.648949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.648983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.649141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.649174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.649372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.649409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 
00:35:54.494 [2024-07-26 06:28:05.649605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.649642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.649832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.649868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.650075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.650125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.650288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.650325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.650542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.650578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 
00:35:54.494 [2024-07-26 06:28:05.650777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.650809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.650967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.650999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.651139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.651172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.651367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.651403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.651567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.651599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 
00:35:54.494 [2024-07-26 06:28:05.651798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.651834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.651983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.652018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.652185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.652221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.652376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.652410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.652569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.652617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 
00:35:54.494 [2024-07-26 06:28:05.652794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.652830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.653013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.653046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.653202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.653238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.653376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.653409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.653588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.653624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 
00:35:54.494 [2024-07-26 06:28:05.653785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.653818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.653972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.654006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.654155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.654207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.654386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.654423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.654592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.654628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 
00:35:54.494 [2024-07-26 06:28:05.654779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.654811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.654972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.655005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.655135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.655168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.655355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.655391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.655600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.655633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 
00:35:54.494 [2024-07-26 06:28:05.655817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.655854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.656079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.656130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.656262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.656295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.494 qpair failed and we were unable to recover it. 00:35:54.494 [2024-07-26 06:28:05.656462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.494 [2024-07-26 06:28:05.656494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.656650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.656682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 
00:35:54.495 [2024-07-26 06:28:05.656857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.656898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.657083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.657126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.657298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.657330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.657512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.657548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.657729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.657762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 
00:35:54.495 [2024-07-26 06:28:05.657951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.657987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.658162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.658208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.658407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.658443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.658613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.658649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.658829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.658862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 
00:35:54.495 [2024-07-26 06:28:05.659022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.659054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.659272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.659308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.659693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.659744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.659966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.659998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 00:35:54.495 [2024-07-26 06:28:05.660128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.495 [2024-07-26 06:28:05.660161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.495 qpair failed and we were unable to recover it. 
00:35:54.498 [2024-07-26 06:28:05.680630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.498 [2024-07-26 06:28:05.680663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.498 qpair failed and we were unable to recover it.
00:35:54.498 [2024-07-26 06:28:05.680818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.498 [2024-07-26 06:28:05.680850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.498 qpair failed and we were unable to recover it.
00:35:54.498 [2024-07-26 06:28:05.681031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.498 [2024-07-26 06:28:05.681076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.498 qpair failed and we were unable to recover it.
00:35:54.498 [2024-07-26 06:28:05.681278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.498 [2024-07-26 06:28:05.681327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.498 qpair failed and we were unable to recover it.
00:35:54.498 [2024-07-26 06:28:05.681525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.498 [2024-07-26 06:28:05.681561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.498 qpair failed and we were unable to recover it.
00:35:54.498 [2024-07-26 06:28:05.681703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.681740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.681910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.681946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.682176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.682212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.682400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.682444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.682629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.682666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 
00:35:54.498 [2024-07-26 06:28:05.682838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.682875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.683035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.683075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.683239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.683273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.683406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.683457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.683643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.683677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 
00:35:54.498 [2024-07-26 06:28:05.683834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.683868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.684032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.684077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.684228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.684265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.684443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.684479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.684670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.684735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 
00:35:54.498 [2024-07-26 06:28:05.684918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.684951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.685102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.685157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.685329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.685366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.685533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.685568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.685754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.685786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 
00:35:54.498 [2024-07-26 06:28:05.685973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.686009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.686202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.686236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.686374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.686408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.686546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.686579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 00:35:54.498 [2024-07-26 06:28:05.686747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.498 [2024-07-26 06:28:05.686798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.498 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.686969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.687005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.687153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.687190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.687366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.687398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.687531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.687563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.687729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.687779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.687977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.688013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.688212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.688245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.688440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.688475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.688611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.688647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.688821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.688857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.689010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.689042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.689233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.689269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.689411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.689452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.689637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.689669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.689831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.689864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.690040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.690084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.690262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.690298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.690551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.690608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.690792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.690824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.690962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.690994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.691203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.691240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.691397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.691433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.691615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.691648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.691846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.691882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.692065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.692098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.692236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.692285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.692469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.692507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.692693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.692725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.692894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.692928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.693090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.693151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.693351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.693384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.693521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.693555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.693717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.693766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.693941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.693977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.694162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.694195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 
00:35:54.499 [2024-07-26 06:28:05.694355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.694388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.694512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.499 [2024-07-26 06:28:05.694545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.499 qpair failed and we were unable to recover it. 00:35:54.499 [2024-07-26 06:28:05.694712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.694764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.694945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.694977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.695163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.695199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 
00:35:54.500 [2024-07-26 06:28:05.695340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.695376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.695586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.695645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.695809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.695842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.696003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.696052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.696268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.696304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 
00:35:54.500 [2024-07-26 06:28:05.696561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.696631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.696815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.696847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.697004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.697037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.697232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.697268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.697448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.697484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 
00:35:54.500 [2024-07-26 06:28:05.697663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.697696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.697901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.697937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.698116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.698153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.698343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.698376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.698557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.698590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 
00:35:54.500 [2024-07-26 06:28:05.698820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.698880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.699067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.699104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.699313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.699368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.699564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.699600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 00:35:54.500 [2024-07-26 06:28:05.699848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.500 [2024-07-26 06:28:05.699907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.500 qpair failed and we were unable to recover it. 
00:35:54.500 [2024-07-26 06:28:05.700113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.700151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.700303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.700339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.700525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.700558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.700690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.700739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.700914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.700951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.701116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.701151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.701306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.701344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.701525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.701561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.701737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.701770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.701933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.701967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.702138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.702172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.702314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.702363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.702541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.702578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.702741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.702775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.500 [2024-07-26 06:28:05.702934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.500 [2024-07-26 06:28:05.702967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.500 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.703145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.703182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.703355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.703392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.703587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.703645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.703799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.703831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.704002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.704038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.704190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.704227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.704397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.704433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.704615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.704647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.704827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.704864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.705035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.705090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.705303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.705335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.705495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.705527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.705711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.705748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.705955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.705988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.706165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.706202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.706388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.706421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.706611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.706685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.706896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.706929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.707118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.707167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.707329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.707363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.707502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.707537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.707725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.707762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.707970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.708006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.708174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.708207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.708340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.708372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.708509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.708542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.708770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.708828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.709075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.709112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.709292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.709325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.709516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.709553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.709713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.501 [2024-07-26 06:28:05.709747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.501 qpair failed and we were unable to recover it.
00:35:54.501 [2024-07-26 06:28:05.709913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.709952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.710117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.710149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.710285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.710335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.710517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.710582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.710788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.710820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.710972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.711008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.711208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.711241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.711373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.711406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.711569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.711601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.711757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.711790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.711968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.712004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.712180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.712216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.712404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.712437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.712592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.712627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.712823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.712857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.713015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.713064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.713200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.713233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.713383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.713415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.713603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.713637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.713795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.713827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.714083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.714133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.714295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.714327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.714484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.714520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.714702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.714735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.714891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.714922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.715126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.715159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.715331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.715367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.715510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.715546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.715697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.715729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.715911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.715947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.716101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.716137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.716312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.716348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.716533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.716565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.716748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.716783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.716986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.717022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.717211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.717248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.717408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.717441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.502 qpair failed and we were unable to recover it.
00:35:54.502 [2024-07-26 06:28:05.717708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.502 [2024-07-26 06:28:05.717765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.717966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.718002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.718190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.718227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.718404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.718441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.718657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.718716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.718891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.718926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.719076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.719113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.719297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.719330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.719467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.719499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.719661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.719712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.719874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.719910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.720120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.720153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.720334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.720369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.720566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.720601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.720774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.720810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.720996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.721028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.721209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.721245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.721437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.721470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.721676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.721713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.721897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.721929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.722145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.722182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.722343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.722379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.722583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.722619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.722832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.722864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.723041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.723088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.723277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.723310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.723507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.723543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.723696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.723728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.723932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.723968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.724141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.724178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.724381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.724417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.724564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.503 [2024-07-26 06:28:05.724596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.503 qpair failed and we were unable to recover it.
00:35:54.503 [2024-07-26 06:28:05.724844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.503 [2024-07-26 06:28:05.724903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.503 qpair failed and we were unable to recover it. 00:35:54.503 [2024-07-26 06:28:05.725115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.503 [2024-07-26 06:28:05.725148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.503 qpair failed and we were unable to recover it. 00:35:54.503 [2024-07-26 06:28:05.725358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.503 [2024-07-26 06:28:05.725394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.503 qpair failed and we were unable to recover it. 00:35:54.503 [2024-07-26 06:28:05.725559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.503 [2024-07-26 06:28:05.725591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.503 qpair failed and we were unable to recover it. 00:35:54.503 [2024-07-26 06:28:05.725787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.503 [2024-07-26 06:28:05.725834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.503 qpair failed and we were unable to recover it. 
00:35:54.503 [2024-07-26 06:28:05.726018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.503 [2024-07-26 06:28:05.726052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.503 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.726259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.726295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.726442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.726475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.726631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.726686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.726841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.726877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 
00:35:54.504 [2024-07-26 06:28:05.727054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.727109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.727262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.727295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.727467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.727517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.727694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.727730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.727906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.727942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 
00:35:54.504 [2024-07-26 06:28:05.728124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.728157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.728332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.728369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.728579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.728615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.728772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.728805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.728942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.728976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 
00:35:54.504 [2024-07-26 06:28:05.729157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.729193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.729381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.729413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.729544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.729576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.729739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.729772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.729977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.730013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 
00:35:54.504 [2024-07-26 06:28:05.730224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.730256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.730409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.730445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.730625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.730658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.730858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.730894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.731069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.731106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 
00:35:54.504 [2024-07-26 06:28:05.731322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.731355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.731489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.731521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.731680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.731714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.731858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.731895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.732042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.732088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 
00:35:54.504 [2024-07-26 06:28:05.732239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.732271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.732409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.732441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.732607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.732640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.732852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.732892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.733097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.733149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 
00:35:54.504 [2024-07-26 06:28:05.733285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.733318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.733502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.733534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.733740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.504 [2024-07-26 06:28:05.733773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.504 qpair failed and we were unable to recover it. 00:35:54.504 [2024-07-26 06:28:05.733931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.733963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.734224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.734261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 
00:35:54.505 [2024-07-26 06:28:05.734442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.734479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.734665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.734701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.734895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.734927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.735085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.735121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.735303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.735335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 
00:35:54.505 [2024-07-26 06:28:05.735497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.735547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.735708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.735740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.735959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.735996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.736209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.736246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.736451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.736483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 
00:35:54.505 [2024-07-26 06:28:05.736643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.736676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.736853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.736889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.737073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.737109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.737297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.737331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.737514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.737547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 
00:35:54.505 [2024-07-26 06:28:05.737818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.737875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.738086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.738122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.738297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.738334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.738517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.738549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.738801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.738860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 
00:35:54.505 [2024-07-26 06:28:05.739015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.739051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.739260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.739296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.739483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.739516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.739649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.739682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.739844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.739893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 
00:35:54.505 [2024-07-26 06:28:05.740069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.740105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.740293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.740326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.740505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.740540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.740701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.740735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 00:35:54.505 [2024-07-26 06:28:05.740874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.505 [2024-07-26 06:28:05.740916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.505 qpair failed and we were unable to recover it. 
00:35:54.505 [2024-07-26 06:28:05.741082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.505 [2024-07-26 06:28:05.741115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.505 qpair failed and we were unable to recover it.
00:35:54.505 [2024-07-26 06:28:05.741244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.505 [2024-07-26 06:28:05.741276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.505 qpair failed and we were unable to recover it.
00:35:54.505 [2024-07-26 06:28:05.741482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.505 [2024-07-26 06:28:05.741518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.505 qpair failed and we were unable to recover it.
[The same three-line cycle — posix.c:1023:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats verbatim, with only the timestamps advancing, from [2024-07-26 06:28:05.741696] through [2024-07-26 06:28:05.764911]; the repeated entries are omitted here.]
00:35:54.509 [2024-07-26 06:28:05.765048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.765088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.765250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.765283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.765421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.765454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.765577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.765609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.765810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.765846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 
00:35:54.509 [2024-07-26 06:28:05.766008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.766044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.766249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.766281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.766426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.766462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.766631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.766672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.766843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.766875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 
00:35:54.509 [2024-07-26 06:28:05.767041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.767087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.767260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.767296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.767475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.767548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.767757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.767789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.767948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.767981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 
00:35:54.509 [2024-07-26 06:28:05.768140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.768192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.768337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.768373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.768548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.768580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.768725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.768761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.768920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.768956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 
00:35:54.509 [2024-07-26 06:28:05.769132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.769168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.769348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.769381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.769585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.769621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.769822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.769858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.770029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.770082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 
00:35:54.509 [2024-07-26 06:28:05.770235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.770268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.770431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.770463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.770601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.770634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.770761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.770793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.770984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.771016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 
00:35:54.509 [2024-07-26 06:28:05.771228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.771265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.771448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.771480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.771649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.771682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.771816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.771848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 00:35:54.509 [2024-07-26 06:28:05.772031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.772071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.509 qpair failed and we were unable to recover it. 
00:35:54.509 [2024-07-26 06:28:05.772314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.509 [2024-07-26 06:28:05.772347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.772479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.772530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.772709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.772742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.772922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.772954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.773147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.773182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 
00:35:54.510 [2024-07-26 06:28:05.773338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.773388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.773555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.773589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.773754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.773795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.773922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.773954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.774136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.774179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 
00:35:54.510 [2024-07-26 06:28:05.774329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.774362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.774537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.774571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.774793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.774831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.774967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.775014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.775194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.775229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 
00:35:54.510 [2024-07-26 06:28:05.775361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.775402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.775563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.775601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.775777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.775810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.776001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.776039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.776181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.776227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 
00:35:54.510 [2024-07-26 06:28:05.776367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.776401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.776572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.776608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.776742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.776775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.776939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.776973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.777127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.777169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 
00:35:54.510 [2024-07-26 06:28:05.777361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.777397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.777564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.777598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.777729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.777772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.777951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.777984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.778114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.778152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 
00:35:54.510 [2024-07-26 06:28:05.778317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.778349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.778483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.778516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.778724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.778763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.778925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.778958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 00:35:54.510 [2024-07-26 06:28:05.779153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.510 [2024-07-26 06:28:05.779188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.510 qpair failed and we were unable to recover it. 
00:35:54.510 [2024-07-26 06:28:05.779365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.511 [2024-07-26 06:28:05.779402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.511 qpair failed and we were unable to recover it. 00:35:54.511 [2024-07-26 06:28:05.779569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.791 [2024-07-26 06:28:05.779614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.791 qpair failed and we were unable to recover it. 00:35:54.791 [2024-07-26 06:28:05.779810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.791 [2024-07-26 06:28:05.779848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.791 qpair failed and we were unable to recover it. 00:35:54.791 [2024-07-26 06:28:05.780009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.791 [2024-07-26 06:28:05.780050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.791 qpair failed and we were unable to recover it. 00:35:54.791 [2024-07-26 06:28:05.780214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.791 [2024-07-26 06:28:05.780247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.791 qpair failed and we were unable to recover it. 
00:35:54.791 [2024-07-26 06:28:05.780425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.791 [2024-07-26 06:28:05.780459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.791 qpair failed and we were unable to recover it.
00:35:54.791 [... the same three-line sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 06:28:05.780459 through 06:28:05.804934 against addr=10.0.0.2, port=4420, mostly for tqpair=0x6150001ffe80, with occasional attempts on tqpair=0x615000210000 and tqpair=0x61500021ff00 ...]
00:35:54.795 [2024-07-26 06:28:05.805122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.805156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.805323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.805363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.805520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.805556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.805734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.805772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.805982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.806021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 
00:35:54.795 [2024-07-26 06:28:05.806213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.806246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.806436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.806473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.806681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.806719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.806873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.806922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.807083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.807117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 
00:35:54.795 [2024-07-26 06:28:05.807282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.807320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.807510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.807543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.807699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.807736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.807913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.807951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.808133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.808173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 
00:35:54.795 [2024-07-26 06:28:05.808334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.808367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.808554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.808588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.808720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.808757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.808913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.808950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.809118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.809153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 
00:35:54.795 [2024-07-26 06:28:05.809313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.809346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.809484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.809517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.809688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.809733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.809919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.809953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.810088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.810123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 
00:35:54.795 [2024-07-26 06:28:05.810307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.810345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.810512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.810544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.810721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.810755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.810939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.810972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.795 [2024-07-26 06:28:05.811126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.811168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 
00:35:54.795 [2024-07-26 06:28:05.811304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.795 [2024-07-26 06:28:05.811342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.795 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.811504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.811538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.811728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.811762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.811899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.811946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.812117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.812151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 
00:35:54.796 [2024-07-26 06:28:05.812304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.812354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.812537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.812584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.812734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.812771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.812927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.812963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.813137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.813171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 
00:35:54.796 [2024-07-26 06:28:05.813314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.813347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.813486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.813520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.813753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.813793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.813924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.813957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.814087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.814121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 
00:35:54.796 [2024-07-26 06:28:05.814288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.814322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.814474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.814508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.814712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.814750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.814926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.814968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.815181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.815219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 
00:35:54.796 [2024-07-26 06:28:05.815385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.815432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.815567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.815608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.815780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.815813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.815959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.815992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.816163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.816197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 
00:35:54.796 [2024-07-26 06:28:05.816355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.816388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.816559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.816593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.816731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.816790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.816997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.817039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 00:35:54.796 [2024-07-26 06:28:05.817211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.796 [2024-07-26 06:28:05.817245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.796 qpair failed and we were unable to recover it. 
00:35:54.797 [2024-07-26 06:28:05.817410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.817451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.817596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.817630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.817771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.817805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.817943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.817976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.818150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.818190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 
00:35:54.797 [2024-07-26 06:28:05.818349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.818400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.818563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.818598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.818790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.818824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.818980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.819014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.819198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.819242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 
00:35:54.797 [2024-07-26 06:28:05.819383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.819416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.819580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.819613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.819834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.819873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.820016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.820049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 00:35:54.797 [2024-07-26 06:28:05.820193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.797 [2024-07-26 06:28:05.820231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.797 qpair failed and we were unable to recover it. 
00:35:54.797 [2024-07-26 06:28:05.820386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.820420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.820597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.820631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.820794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.820827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.821009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.821045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.821229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.821262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.821452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.821487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.821649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.821682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.821817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.821852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.822045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.822089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.822229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.822268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.822410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.822443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.822611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.822654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.822814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.822848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.822998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.823035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.823299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.823333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.823468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.823502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.823686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.823726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.823872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.823905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.824073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.824107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.824245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.824289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.824448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.824480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.824633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.797 [2024-07-26 06:28:05.824665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.797 qpair failed and we were unable to recover it.
00:35:54.797 [2024-07-26 06:28:05.824798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.824832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.824986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.825029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.825218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.825251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.825436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.825473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.825695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.825731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.825859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.825891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.826030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.826073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.826241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.826277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.826482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.826517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.826725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.826758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.826939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.826972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.827113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.827153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.827317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.827356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.827519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.827553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.827691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.827724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.827905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.827939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.828115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.828155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.828285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.828318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.828469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.828502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.828665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.828698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.828856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.828890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.829077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.829111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.829244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.829284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.829455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.829489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.829652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.829685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.829839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.829874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.830043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.830084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.830249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.830283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.830487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.830541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.830729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.830763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.830892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.830925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.831074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.831108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.831246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.831279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.831411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.831442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.831612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.831648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.831795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.831831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.831981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.832016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.798 [2024-07-26 06:28:05.832173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.798 [2024-07-26 06:28:05.832205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.798 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.832367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.832400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.832535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.832567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.832700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.832732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.832864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.832896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.833069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.833102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.833233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.833265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.833424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.833473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.833624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.833656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.833795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.833827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.833963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.833994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.834117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.834150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.834285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.834329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.834481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.834513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.834696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.834732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.834903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.834939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.835125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.835159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.835295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.835328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.835488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.835520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.835679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.835728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.835906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.835942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.836106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.836138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.836278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.836310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.836473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.836505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.836636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.836668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.836796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.836828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.836963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.836995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.837205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.837238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.837395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.837427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.837586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.837618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.837784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.837816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.837946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.837983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.838115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.838148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.838288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.838320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.838477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.838513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.838689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.838725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.838902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.838934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.799 [2024-07-26 06:28:05.839078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.799 [2024-07-26 06:28:05.839120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.799 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.839309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.839341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.839474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.839525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.839711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.839743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.839933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.839965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.840099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.840132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.840291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.840323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.840483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.840515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.840653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.840685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.840828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.840860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.841011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.841047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.841202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.841235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.841359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.841392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.841517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.841549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.841713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.841745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.841922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.841957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.842119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.842153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.842313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.842345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.842504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.800 [2024-07-26 06:28:05.842536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:54.800 qpair failed and we were unable to recover it.
00:35:54.800 [2024-07-26 06:28:05.842691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.842723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.842875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.842907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.843086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.843119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.843277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.843310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.843465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.843497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 
00:35:54.800 [2024-07-26 06:28:05.843668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.843701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.843828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.843860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.844011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.844043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.844233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.844265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.844407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.844443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 
00:35:54.800 [2024-07-26 06:28:05.844630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.844662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.844798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.844830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.844961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.844994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.845150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.845183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.845319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.845351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 
00:35:54.800 [2024-07-26 06:28:05.845514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.845551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.845732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.845765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.845920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.845956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.846102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.800 [2024-07-26 06:28:05.846138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.800 qpair failed and we were unable to recover it. 00:35:54.800 [2024-07-26 06:28:05.846314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.846351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 
00:35:54.801 [2024-07-26 06:28:05.846513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.846556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.846714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.846746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.846886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.846919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.847086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.847123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.847301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.847334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 
00:35:54.801 [2024-07-26 06:28:05.847489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.847522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.847680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.847712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.847866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.847898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.848070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.848103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.848333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.848365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 
00:35:54.801 [2024-07-26 06:28:05.848508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.848540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.848670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.848703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.848888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.848920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.849078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.849111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.849244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.849276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 
00:35:54.801 [2024-07-26 06:28:05.849410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.849442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.849632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.849665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.849821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.849853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.850001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.850034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.850198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.850249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 
00:35:54.801 [2024-07-26 06:28:05.850410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.850446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.850616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.850665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.850811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.850864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.851043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.851100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.851319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.851352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 
00:35:54.801 [2024-07-26 06:28:05.851508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.851548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.851694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.851730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.851900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.851932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.852091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.852125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.852262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.852300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 
00:35:54.801 [2024-07-26 06:28:05.852470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.801 [2024-07-26 06:28:05.852503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.801 qpair failed and we were unable to recover it. 00:35:54.801 [2024-07-26 06:28:05.852668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.852701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.852859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.852896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.853088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.853122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.853268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.853301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 
00:35:54.802 [2024-07-26 06:28:05.853468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.853518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.853717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.853750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.853934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.853971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.854133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.854174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.854326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.854359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 
00:35:54.802 [2024-07-26 06:28:05.854524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.854558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.854688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.854726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.854880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.854913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.855067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.855102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.855264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.855310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 
00:35:54.802 [2024-07-26 06:28:05.855457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.855490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.855680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.855713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.855870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.855909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.856121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.856154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.856301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.856335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 
00:35:54.802 [2024-07-26 06:28:05.856551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.856595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.856772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.856808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.856964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.856998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.857163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.857211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 00:35:54.802 [2024-07-26 06:28:05.857390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.802 [2024-07-26 06:28:05.857423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.802 qpair failed and we were unable to recover it. 
00:35:54.802 [2024-07-26 06:28:05.857585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.802 [2024-07-26 06:28:05.857622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.802 qpair failed and we were unable to recover it.
[last message group repeated with successive timestamps from 2024-07-26 06:28:05.857803 through 2024-07-26 06:28:05.881445; identical tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420, errno = 111 in every occurrence]
00:35:54.805 [2024-07-26 06:28:05.881595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.881631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.805 qpair failed and we were unable to recover it. 00:35:54.805 [2024-07-26 06:28:05.881813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.881850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.805 qpair failed and we were unable to recover it. 00:35:54.805 [2024-07-26 06:28:05.881996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.882038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.805 qpair failed and we were unable to recover it. 00:35:54.805 [2024-07-26 06:28:05.882231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.882267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.805 qpair failed and we were unable to recover it. 00:35:54.805 [2024-07-26 06:28:05.882411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.882448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.805 qpair failed and we were unable to recover it. 
00:35:54.805 [2024-07-26 06:28:05.882598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.882633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.805 qpair failed and we were unable to recover it. 00:35:54.805 [2024-07-26 06:28:05.882789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.882822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.805 qpair failed and we were unable to recover it. 00:35:54.805 [2024-07-26 06:28:05.882956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.805 [2024-07-26 06:28:05.882989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.883156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.883207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.883361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.883397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 
00:35:54.806 [2024-07-26 06:28:05.883583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.883615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.883774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.883806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.883943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.883990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.884153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.884189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.884342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.884375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 
00:35:54.806 [2024-07-26 06:28:05.884576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.884611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.884786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.884821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.884996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.885032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.885193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.885226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.885384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.885433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 
00:35:54.806 [2024-07-26 06:28:05.885582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.885618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.885818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.885853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.886038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.886078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.886214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.886246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.886448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.886480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 
00:35:54.806 [2024-07-26 06:28:05.886612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.886667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.886879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.886911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.887110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.887147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.887386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.887444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.887634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.887667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 
00:35:54.806 [2024-07-26 06:28:05.887802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.887834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.888012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.888047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.888245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.888278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.888463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.888500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.888701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.888733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 
00:35:54.806 [2024-07-26 06:28:05.888887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.888923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.889119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.889184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.889320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.889358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.889504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.889537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.889705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.889737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 
00:35:54.806 [2024-07-26 06:28:05.889876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.889908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.890093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.890145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.890361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.806 [2024-07-26 06:28:05.890393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.806 qpair failed and we were unable to recover it. 00:35:54.806 [2024-07-26 06:28:05.890554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.890589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.890732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.890768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 
00:35:54.807 [2024-07-26 06:28:05.890948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.890980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.891142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.891175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.891352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.891387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.891607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.891639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.891825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.891857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 
00:35:54.807 [2024-07-26 06:28:05.892086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.892119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.892247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.892278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.892439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.892471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.892660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.892696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.892845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.892877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 
00:35:54.807 [2024-07-26 06:28:05.893037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.893102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.893258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.893291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.893456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.893506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.893649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.893682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.893861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.893911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 
00:35:54.807 [2024-07-26 06:28:05.894108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.894144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.894342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.894378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.894534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.894566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.894716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.894751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.894930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.894966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 
00:35:54.807 [2024-07-26 06:28:05.895201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.895239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.895379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.895421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.895576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.895627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.895805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.895841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.896024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.896056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 
00:35:54.807 [2024-07-26 06:28:05.896227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.896259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.896435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.896471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.896762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.896819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.897012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.897044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 00:35:54.807 [2024-07-26 06:28:05.897188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.807 [2024-07-26 06:28:05.897220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.807 qpair failed and we were unable to recover it. 
00:35:54.807 [2024-07-26 06:28:05.897350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.807 [2024-07-26 06:28:05.897382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.807 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats ~114 more times, timestamps 06:28:05.897567 through 06:28:05.921581 ...]
00:35:54.811 [2024-07-26 06:28:05.921729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.921762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.921921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.921970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.922239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.922298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.922507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.922540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.922723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.922766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 
00:35:54.811 [2024-07-26 06:28:05.922917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.922954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.923140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.923173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.923308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.923359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.923540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.923573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.923754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.923790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 
00:35:54.811 [2024-07-26 06:28:05.923957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.923993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.924133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.924171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.924353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.924385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.924522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.924573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.924770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.924806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 
00:35:54.811 [2024-07-26 06:28:05.924994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.925030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.925194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.925226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.925396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.925432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.925706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.925762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.925943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.925978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 
00:35:54.811 [2024-07-26 06:28:05.926168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.926201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.926377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.926414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.926654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.926687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.926864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.926901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.927097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.927130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 
00:35:54.811 [2024-07-26 06:28:05.927283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.927319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.927513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.927545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.927705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.927737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.927900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.927932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.928090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.928139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 
00:35:54.811 [2024-07-26 06:28:05.928328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.928360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.928520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.928553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.928708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.928740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.928914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.928950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 00:35:54.811 [2024-07-26 06:28:05.929133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.811 [2024-07-26 06:28:05.929165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.811 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.929336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.929373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.929540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.929573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.929751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.929787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.929945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.929979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.930163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.930214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.930388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.930421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.930580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.930613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.930749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.930781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.930964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.930996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.931177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.931210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.931333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.931365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.931580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.931648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.931849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.931885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.932073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.932114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.932280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.932316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.932488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.932524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.932694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.932730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.932912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.932945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.933126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.933163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.933341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.933377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.933522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.933559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.933710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.933743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.933903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.933935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.934139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.934176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.934334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.934367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.934505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.934538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.934696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.934729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.934897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.934933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.935113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.935149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.935352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.935384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.935532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.935567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.935708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.935744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.935954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.935990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.936157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.936201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.936364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.936414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 
00:35:54.812 [2024-07-26 06:28:05.936590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.936622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.936753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.812 [2024-07-26 06:28:05.936785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.812 qpair failed and we were unable to recover it. 00:35:54.812 [2024-07-26 06:28:05.936969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.937002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 00:35:54.813 [2024-07-26 06:28:05.937164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.937200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 00:35:54.813 [2024-07-26 06:28:05.937412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.937444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 
00:35:54.813 [2024-07-26 06:28:05.937575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.937607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 00:35:54.813 [2024-07-26 06:28:05.937738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.937770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 00:35:54.813 [2024-07-26 06:28:05.937975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.938011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 00:35:54.813 [2024-07-26 06:28:05.938223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.938259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 00:35:54.813 [2024-07-26 06:28:05.938442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.813 [2024-07-26 06:28:05.938475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.813 qpair failed and we were unable to recover it. 
00:35:54.816 [2024-07-26 06:28:05.961648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.961684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.961883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.961919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.962107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.962140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.962310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.962346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.962612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.962670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 
00:35:54.816 [2024-07-26 06:28:05.962861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.962898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.963090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.963132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.963264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.963315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.963499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.963532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.963685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.963717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 
00:35:54.816 [2024-07-26 06:28:05.963905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.963937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.964068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.964119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.964291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.964326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.964500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.964536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.964687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.964720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 
00:35:54.816 [2024-07-26 06:28:05.964894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.964930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.965110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.965143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.965330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.965362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.965530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.965562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.965705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.965738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 
00:35:54.816 [2024-07-26 06:28:05.965897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.965947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.966151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.966187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.966366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.966399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.966583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.966618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.966785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.966821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 
00:35:54.816 [2024-07-26 06:28:05.966991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.816 [2024-07-26 06:28:05.967027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.816 qpair failed and we were unable to recover it. 00:35:54.816 [2024-07-26 06:28:05.967221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.967254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.967409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.967441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.967594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.967643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.967831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.967863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 
00:35:54.817 [2024-07-26 06:28:05.968041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.968091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.968248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.968280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.968460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.968501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.968678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.968713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.968893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.968926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 
00:35:54.817 [2024-07-26 06:28:05.969131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.969167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.969346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.969381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.969578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.969614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.969767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.969801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.969959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.970008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 
00:35:54.817 [2024-07-26 06:28:05.970184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.970221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.970389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.970425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.970582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.970614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.970748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.970798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.970937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.970974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 
00:35:54.817 [2024-07-26 06:28:05.971148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.971184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.971381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.971413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.971566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.971617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.971800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.971852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.972065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.972098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 
00:35:54.817 [2024-07-26 06:28:05.972228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.972260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.972441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.972477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.972622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.972658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.972868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.972900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.973065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.973098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 
00:35:54.817 [2024-07-26 06:28:05.973224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.973256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.973434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.973485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.973682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.973718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.973861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.973893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.974078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.974115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 
00:35:54.817 [2024-07-26 06:28:05.974317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.974350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.974522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.974558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.974780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.974812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.817 [2024-07-26 06:28:05.975000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.817 [2024-07-26 06:28:05.975036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.817 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.975185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.975221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 
00:35:54.818 [2024-07-26 06:28:05.975365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.975401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.975551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.975584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.975741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.975774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.975900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.975951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.976130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.976167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 
00:35:54.818 [2024-07-26 06:28:05.976347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.976409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.976566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.976602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.976786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.976825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.976982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.977014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.977180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.977213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 
00:35:54.818 [2024-07-26 06:28:05.977367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.977400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.977555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.977588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.977764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.977800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.978007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.978039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 00:35:54.818 [2024-07-26 06:28:05.978199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.818 [2024-07-26 06:28:05.978235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.818 qpair failed and we were unable to recover it. 
00:35:54.821 [2024-07-26 06:28:06.001251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.001284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.001465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.001501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.001653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.001694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.001873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.001908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.002083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.002134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 
00:35:54.821 [2024-07-26 06:28:06.002270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.002303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.002527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.002588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.002766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.002802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.002951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.002994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.003206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.003243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 
00:35:54.821 [2024-07-26 06:28:06.003422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.003483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.003671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.003704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.003861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.003894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.004082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.004119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.004309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.004343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 
00:35:54.821 [2024-07-26 06:28:06.004506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.004557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.004744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.004776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.004958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.004994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.821 qpair failed and we were unable to recover it. 00:35:54.821 [2024-07-26 06:28:06.005144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.821 [2024-07-26 06:28:06.005180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.005349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.005385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.005565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.005597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.005749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.005781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.005959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.005994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.006140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.006177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.006359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.006391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.006598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.006634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.006817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.006849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.007003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.007035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.007224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.007256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.007471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.007506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.007737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.007774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.007947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.007983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.008167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.008200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.008381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.008418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.008647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.008705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.008879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.008915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.009118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.009151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.009332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.009368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.009601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.009658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.009833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.009870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.010051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.010090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.010268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.010304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.010512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.010574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.010787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.010820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.010981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.011013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.011161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.011194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.011380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.011416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.011593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.011629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.011839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.011871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.012045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.012091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.012265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.012301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.012491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.012524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.012654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.012688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.012840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.012889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.013106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.013140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 
00:35:54.822 [2024-07-26 06:28:06.013312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.822 [2024-07-26 06:28:06.013345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.822 qpair failed and we were unable to recover it. 00:35:54.822 [2024-07-26 06:28:06.013520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.013552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.013762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.013798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.013988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.014024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.014240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.014273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 
00:35:54.823 [2024-07-26 06:28:06.014452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.014484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.014628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.014664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.014820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.014856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.015028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.015071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.015251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.015284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 
00:35:54.823 [2024-07-26 06:28:06.015463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.015500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.015713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.015745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.015907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.015939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.016127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.016160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.016311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.016347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 
00:35:54.823 [2024-07-26 06:28:06.016533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.016566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.016814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.016850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.017054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.017127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.017306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.017343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.017586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.017644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 
00:35:54.823 [2024-07-26 06:28:06.017850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.017883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.018043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.018085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.018290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.018326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.018651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.018718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.018926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.018962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 
00:35:54.823 [2024-07-26 06:28:06.019145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.019180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.019382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.019418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.019605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.019667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.019838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.019874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 00:35:54.823 [2024-07-26 06:28:06.020057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.823 [2024-07-26 06:28:06.020095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.823 qpair failed and we were unable to recover it. 
00:35:54.823 [2024-07-26 06:28:06.020231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.020268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.020424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.020457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.020664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.020701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.020855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.020887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.021026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.021064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.021264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.021300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.021500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.021536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.021699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.021732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.823 [2024-07-26 06:28:06.021870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.823 [2024-07-26 06:28:06.021921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.823 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.022086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.022122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.022319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.022356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.022529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.022561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.022697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.022729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.022885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.022918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.023133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.023168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.023387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.023422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.023618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.023654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.023798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.023836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.024057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.024124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.024316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.024349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.024492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.024527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.024664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.024703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.024961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.024997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.025156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.025201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.025387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.025445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.025612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.025652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.025828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.025873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.026069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.026103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.026310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.026347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.026491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.026528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.026682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.026719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.026912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.026945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.027111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.027145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.027306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.027340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.027527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.027567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.027779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.027815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.027951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.027985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.028119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.028160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.028357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.028390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.028551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.028585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.824 [2024-07-26 06:28:06.028719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.824 [2024-07-26 06:28:06.028779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.824 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.028967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.029008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.029201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.029235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.029374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.029406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.029599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.029633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.029775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.029812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.029998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.030035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.030232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.030269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.030412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.030445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.030607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.030639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.030781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.030814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.030957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.030990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.031171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.031209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.031395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.031434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.031588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.031627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.031816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.031851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.032014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.032047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.032195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.032228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.032365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.032418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.032642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.032679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.032822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.032859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.033017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.033053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.033214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.033247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.033448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.033481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.033630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.033668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.033822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.033857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.034118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.034167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.034318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.034352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.034508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.034542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.034727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.034767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.034957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.035015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.035283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.035325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.035504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.035538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.035682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.035716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.035880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.035919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.036087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.036121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.036269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.036306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.036449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.036491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.825 qpair failed and we were unable to recover it.
00:35:54.825 [2024-07-26 06:28:06.036680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.825 [2024-07-26 06:28:06.036717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.036870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.036911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.037076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.037109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.037274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.037308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.037466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.037507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.037683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.037716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.037876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.037910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.038082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.038121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.038280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.038325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.038498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.038532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.038728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.038765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.038949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.038990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.039182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.039217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.039377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.039420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.039580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.039613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.039784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.039816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.039956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.039990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.040156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.040191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.040380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.040429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.040600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.040635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.040799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.040834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.040976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.041009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.041162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.041196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.041366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.041400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.041559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.041593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.041783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.041820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.041983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.042017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.042187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.042221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.042402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.042442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.042684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.042716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.042880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.042914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.043083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.043125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.043261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.043293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.043435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.043470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.043624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.043664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.043806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.043838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.043998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.044031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.044208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.826 [2024-07-26 06:28:06.044247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.826 qpair failed and we were unable to recover it.
00:35:54.826 [2024-07-26 06:28:06.044435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.044484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.044751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.044809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.045020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.045064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.045238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.045272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.045455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.045487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.045644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.045677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.045817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.045850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.046054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.046093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.046273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.046310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.046563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.046623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.046826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.046862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.047019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.047053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.047233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.047266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.047499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.047533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.047673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.047706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.047846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.047879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.048009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.048042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.048224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.048277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.048440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.048477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.048636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.048669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.048806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.048840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.049010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.049044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.049201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.049235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.049440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.049473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.049785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.049843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.050004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.050037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.050214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.050247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.050405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.050439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.050585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.050644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.050860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.050919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.051119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.051166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.051357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.051391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.051557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.051593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.051767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.051801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.051939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.051990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.052161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.052195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.052375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.827 [2024-07-26 06:28:06.052409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.827 qpair failed and we were unable to recover it.
00:35:54.827 [2024-07-26 06:28:06.052571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.052605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.052744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.052780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.052967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.053005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.053179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.053213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.053390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.053495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.053656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.053698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.053875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.053910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.054094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.054140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.054318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.054352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.054546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.054584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.054784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.054817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.054961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.054995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.055156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.055190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.055378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.055416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.055554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.055587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.055725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.055777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.055954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.055990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.056206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.056241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.056388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.056422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.056581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.056621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.056764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.056797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.056958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.056992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.057155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.057190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.057364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.057398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.057553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.057587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.057746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.057780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.057969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.058012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.058166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.058200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.058335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.058368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.058533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.058566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.058699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.058731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.058992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.059035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.059225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.059259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.059397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.059430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.059620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.059658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.059801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.059834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.060026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.828 [2024-07-26 06:28:06.060071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:54.828 qpair failed and we were unable to recover it.
00:35:54.828 [2024-07-26 06:28:06.060277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.828 [2024-07-26 06:28:06.060318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.828 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.060482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.060524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.060681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.060714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.060874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.060915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.061069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.061103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 
00:35:54.829 [2024-07-26 06:28:06.061342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.061375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.061563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.061601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.061764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.061802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.061968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.062005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.062175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.062217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 
00:35:54.829 [2024-07-26 06:28:06.062388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.062421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.062578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.062615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.062794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.062833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.063021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.063054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.063233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.063266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 
00:35:54.829 [2024-07-26 06:28:06.063410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.063451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.063589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.063623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.063812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.063845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.064015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.064073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.064245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.064290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 
00:35:54.829 [2024-07-26 06:28:06.064475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.064509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.064663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.064697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.064868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.064902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.065044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.065096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.065246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.065280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 
00:35:54.829 [2024-07-26 06:28:06.065451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.065485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.065686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.065727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.065903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.065935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.066119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.066154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.066317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.066354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 
00:35:54.829 [2024-07-26 06:28:06.066513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.066566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.066747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.066783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.829 qpair failed and we were unable to recover it. 00:35:54.829 [2024-07-26 06:28:06.066923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.829 [2024-07-26 06:28:06.066958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.067189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.067225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.067397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.067446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.067583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.067617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.067762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.067795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.067964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.068000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.068233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.068271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.068450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.068487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.068634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.068671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.068838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.068871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.069012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.069045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.069253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.069289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.069428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.069461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.069633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.069665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.069831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.069865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.070025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.070069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.070224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.070261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.070411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.070447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.070636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.070674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.070856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.070893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.071051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.071110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.071301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.071343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.071481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.071515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.071702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.071739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.071895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.071943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.072084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.072119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.072241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.072275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.072414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.072448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.072587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.072622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.072830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.072865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.073022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.073056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.073271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.073315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.073480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.073514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.073685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.073719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.073850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.073883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.074077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.074123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.074270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.074303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 00:35:54.830 [2024-07-26 06:28:06.074471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.830 [2024-07-26 06:28:06.074504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.830 qpair failed and we were unable to recover it. 
00:35:54.830 [2024-07-26 06:28:06.074703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.074759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.074938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.074975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.075147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.075184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.075381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.075415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.075544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.075582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 
00:35:54.831 [2024-07-26 06:28:06.075742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.075783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.075938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.075973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.076167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.076201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.076370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.076404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.076591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.076624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 
00:35:54.831 [2024-07-26 06:28:06.076755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.076788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.076938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.076972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.077131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.077170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.077352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.077394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.077557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.077591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 
00:35:54.831 [2024-07-26 06:28:06.077765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.077799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.077984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.078016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.078181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.078215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.078364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.078399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.078589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.078623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 
00:35:54.831 [2024-07-26 06:28:06.078801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.078834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.079001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.079035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.079186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.079220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.079372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.079405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 00:35:54.831 [2024-07-26 06:28:06.079572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.831 [2024-07-26 06:28:06.079606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.831 qpair failed and we were unable to recover it. 
00:35:54.831 [2024-07-26 06:28:06.079784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.079824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.080005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.080039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.080261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.080295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.080472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.080520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.080702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.080735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.080917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.080950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.081137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.081171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.081332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.081365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.081568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.081601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.081733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.081769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.081900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.081936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.082126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.082159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.831 qpair failed and we were unable to recover it.
00:35:54.831 [2024-07-26 06:28:06.082291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.831 [2024-07-26 06:28:06.082328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.082494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.082531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.082698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.082731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.082890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.082927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.083082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.083120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.083284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.083325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.083505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.083539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.083672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.083713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.083879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.083934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.084122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.084156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.084303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.084341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.084535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.084568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.084699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.084731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.084868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.084901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.085039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.085083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.085247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.085280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.085471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.085508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.085653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.085690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.085873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.085906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.086072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.086110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.086246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.086280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.086422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.086470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.086664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.086698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.086856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.086889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.087051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.087095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.087281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.087319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.087482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.087516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.087660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.087696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.087843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.087880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.088050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.088116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.088249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.088282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.088421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.088455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.088594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.088627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.088789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.088841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.089035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.089089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.089236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.089269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.089445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.089478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.089638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.089674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.089844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.089879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.832 [2024-07-26 06:28:06.090003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.832 [2024-07-26 06:28:06.090036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.832 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.090212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.090246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.090393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.090428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.090590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.090624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.090777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.090810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.090945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.090978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.091122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.091156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.091323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.091357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.091545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.091584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.091746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.091779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.091912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.091945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.092120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.092155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.092336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.092374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.092546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.092583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.092757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.092794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.092986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.093019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.093183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.093226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.093411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.093448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.093592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.093628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.093798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.093832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.093964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.093998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.094186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.094222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.094438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.094472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.094644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.094678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.094834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.094868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.095077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.095123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.095330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.095367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.095554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.095587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.095759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.095796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.096013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.096046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.096241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.096279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.096435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.096469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.096638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.096672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.096854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.096891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.097067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.097126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.833 qpair failed and we were unable to recover it.
00:35:54.833 [2024-07-26 06:28:06.097265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.833 [2024-07-26 06:28:06.097298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.097465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.097499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.097702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.097739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.097885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.097922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.098112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.098146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.098294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.098334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.098525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.098560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.098715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.098766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.098961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.834 [2024-07-26 06:28:06.098995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:54.834 qpair failed and we were unable to recover it.
00:35:54.834 [2024-07-26 06:28:06.099134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.099169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.099294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.099328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.099544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.099580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.099782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.099815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.099998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.100039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 
00:35:54.834 [2024-07-26 06:28:06.100256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.100289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.100515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.100548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.100742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.100775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.100930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.100964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.101120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.101154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 
00:35:54.834 [2024-07-26 06:28:06.101283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.101323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.101490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.101523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.101680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.101717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.101895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.101932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.102137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.102170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 
00:35:54.834 [2024-07-26 06:28:06.102316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.102349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.102479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.102530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.102713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.102750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.102965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.102999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:54.834 [2024-07-26 06:28:06.103172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.103205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 
00:35:54.834 [2024-07-26 06:28:06.103332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.834 [2024-07-26 06:28:06.103385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:54.834 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.103559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.103602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.103753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.103787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.103982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.104016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.104183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.104217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 
00:35:55.114 [2024-07-26 06:28:06.104402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.104435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.104628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.104660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.104848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.104881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.105038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.105080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.105212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.105245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 
00:35:55.114 [2024-07-26 06:28:06.105400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.105433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.105577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.105629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.105820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.105854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.106007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.106040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.106214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.106246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 
00:35:55.114 [2024-07-26 06:28:06.106434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.106472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.106684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.106727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.106857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.106889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.107048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.107099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.107231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.107264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 
00:35:55.114 [2024-07-26 06:28:06.107419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.107451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.107607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.107640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.107825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.107857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.108001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.108034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.108213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.108267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 
00:35:55.114 [2024-07-26 06:28:06.108446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.108489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.114 [2024-07-26 06:28:06.108649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.114 [2024-07-26 06:28:06.108705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.114 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.108865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.108900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.109037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.109081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.109243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.109276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 
00:35:55.115 [2024-07-26 06:28:06.109411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.109443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.109596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.109628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.109807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.109843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.110047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.110090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.110250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.110282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 
00:35:55.115 [2024-07-26 06:28:06.110418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.110451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.110585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.110618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.110784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.110817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.110974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.111011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.111196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.111229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 
00:35:55.115 [2024-07-26 06:28:06.111373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.111421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.111593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.111629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.111792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.111825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.111987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.112020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.112173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.112206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 
00:35:55.115 [2024-07-26 06:28:06.112340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.112373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.112558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.112596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.112809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.112845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.113001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.113034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.113180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.113213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 
00:35:55.115 [2024-07-26 06:28:06.113346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.113397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.113604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.113654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.113835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.113872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.114052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.114095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.114228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.114262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 
00:35:55.115 [2024-07-26 06:28:06.114421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.114453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.114611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.114645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.114818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.114855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.115039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.115117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 00:35:55.115 [2024-07-26 06:28:06.115269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.115 [2024-07-26 06:28:06.115302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.115 qpair failed and we were unable to recover it. 
00:35:55.116 [2024-07-26 06:28:06.115491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.115524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.115680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.115713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.115926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.115964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.116124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.116158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.116288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.116326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 
00:35:55.116 [2024-07-26 06:28:06.116548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.116581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.116770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.116820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.117029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.117075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.117225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.117258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.117451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.117483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 
00:35:55.116 [2024-07-26 06:28:06.117664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.117701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.117896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.117929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.118088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.118122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.118248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.118281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.118482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.118519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 
00:35:55.116 [2024-07-26 06:28:06.118699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.118735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.118909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.118945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.119142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.119175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.119303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.119336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.119504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.119555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 
00:35:55.116 [2024-07-26 06:28:06.119760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.119796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.119973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.120006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.120155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.120189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.120324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.120357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.120483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.120515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 
00:35:55.116 [2024-07-26 06:28:06.120703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.120735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.120910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.120946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.121143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.121177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.121317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.121369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.121549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.121581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 
00:35:55.116 [2024-07-26 06:28:06.121729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.121764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.121975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.122008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.122143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.122175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.122300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.122333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.116 qpair failed and we were unable to recover it. 00:35:55.116 [2024-07-26 06:28:06.122456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.116 [2024-07-26 06:28:06.122489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 
00:35:55.117 [2024-07-26 06:28:06.122685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.122717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.122871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.122921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.123164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.123197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.123328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.123360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.123547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.123584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 
00:35:55.117 [2024-07-26 06:28:06.123730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.123766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.124005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.124037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.124190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.124223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.124376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.124412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.124603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.124661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 
00:35:55.117 [2024-07-26 06:28:06.124841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.124874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.125014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.125046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.125203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.125236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.125437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.125484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.125637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.125670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 
00:35:55.117 [2024-07-26 06:28:06.125803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.125835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.126007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.126043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.126233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.126267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.126453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.126486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.126626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.126659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 
00:35:55.117 [2024-07-26 06:28:06.126860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.126897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.127157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.127190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.127346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.127379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.127595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.127631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.127851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.127884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 
00:35:55.117 [2024-07-26 06:28:06.128031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.128076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.128231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.128263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.128453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.128507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.128654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.128691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.128869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.128906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 
00:35:55.117 [2024-07-26 06:28:06.129149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.129181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.129325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.129376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.129548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.129584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.129760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.129796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.117 qpair failed and we were unable to recover it. 00:35:55.117 [2024-07-26 06:28:06.129956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.117 [2024-07-26 06:28:06.129989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 
00:35:55.118 [2024-07-26 06:28:06.130145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.130178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.130386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.130422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.130590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.130625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.130776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.130808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.130967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.131000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 
00:35:55.118 [2024-07-26 06:28:06.131126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.131160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.131364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.131400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.131582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.131614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.131788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.131825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.131971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.132007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 
00:35:55.118 [2024-07-26 06:28:06.132173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.132206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.132370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.132402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.132615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.132671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.132874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.132909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.133122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.133160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 
00:35:55.118 [2024-07-26 06:28:06.133322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.133355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.133502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.133556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.133737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.133770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.133957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.133989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.134131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.134165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 
00:35:55.118 [2024-07-26 06:28:06.134337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.134394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.134583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.134616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.134800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.134832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.135066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.135099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.135231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.135264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 
00:35:55.118 [2024-07-26 06:28:06.135426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.135460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.135619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.135651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.135787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.135819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.136002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.136038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 00:35:55.118 [2024-07-26 06:28:06.136217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.118 [2024-07-26 06:28:06.136250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.118 qpair failed and we were unable to recover it. 
00:35:55.118 [2024-07-26 06:28:06.136393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.118 [2024-07-26 06:28:06.136429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.118 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure triplet repeated ~115 times between 06:28:06.136393 and 06:28:06.160403, all with tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420, errno = 111]
00:35:55.122 [2024-07-26 06:28:06.160369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.122 [2024-07-26 06:28:06.160403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.122 qpair failed and we were unable to recover it.
00:35:55.122 [2024-07-26 06:28:06.160575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.160611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.160786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.160822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.161003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.161035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.161228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.161266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.161469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.161506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 
00:35:55.122 [2024-07-26 06:28:06.161705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.161742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.161889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.161921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.162048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.162087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.162275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.162311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.162492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.162524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 
00:35:55.122 [2024-07-26 06:28:06.162679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.162711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.162835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.162866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.163014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.163050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.163233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.163268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.163444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.163475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 
00:35:55.122 [2024-07-26 06:28:06.163602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.163653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.163872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.163908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.164032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.164081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.164243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.164275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.164448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.164484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 
00:35:55.122 [2024-07-26 06:28:06.164657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.164693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.122 [2024-07-26 06:28:06.164902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.122 [2024-07-26 06:28:06.164934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.122 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.165098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.165131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.165263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.165295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.165526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.165558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 
00:35:55.123 [2024-07-26 06:28:06.165685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.165726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.165889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.165921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.166053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.166092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.166253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.166305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.166456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.166491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 
00:35:55.123 [2024-07-26 06:28:06.166709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.166741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.166874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.166906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.167087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.167141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.167299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.167332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.167512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.167545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 
00:35:55.123 [2024-07-26 06:28:06.167724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.167760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.167959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.167995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.168178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.168214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.168397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.168428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.168606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.168641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 
00:35:55.123 [2024-07-26 06:28:06.168839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.168871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.169033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.169072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.169235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.169266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.169463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.169517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.169699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.169730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 
00:35:55.123 [2024-07-26 06:28:06.169862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.169894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.170790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.170832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.171038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.171084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.171254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.171287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.171427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.171459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 
00:35:55.123 [2024-07-26 06:28:06.171649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.171681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.171855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.171909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.172107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.172143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.172322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.172354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.172511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.172544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 
00:35:55.123 [2024-07-26 06:28:06.172678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.172711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.172894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.172948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.123 qpair failed and we were unable to recover it. 00:35:55.123 [2024-07-26 06:28:06.173144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.123 [2024-07-26 06:28:06.173176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.173314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.173345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.173521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.173556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 
00:35:55.124 [2024-07-26 06:28:06.173754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.173790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.173954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.173987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.174169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.174200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.174337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.174368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.174499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.174533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 
00:35:55.124 [2024-07-26 06:28:06.174660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.174691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.174895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.174926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.175086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.175120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.175276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.175309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.175487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.175523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 
00:35:55.124 [2024-07-26 06:28:06.175676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.175708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.175865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.175897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.176131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.176169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.176326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.176359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.176546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.176579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 
00:35:55.124 [2024-07-26 06:28:06.176731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.176765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.176952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.176987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.177164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.177202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.177355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.177387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 00:35:55.124 [2024-07-26 06:28:06.177595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.124 [2024-07-26 06:28:06.177648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.124 qpair failed and we were unable to recover it. 
00:35:55.128 [2024-07-26 06:28:06.200912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.200944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.201112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.201145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.201308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.201339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.201499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.201531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.201690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.201722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 
00:35:55.128 [2024-07-26 06:28:06.201863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.201914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.202124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.202162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.202309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.202345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.202526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.202558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.202740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.202775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 
00:35:55.128 [2024-07-26 06:28:06.202953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.202985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.203158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.203190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.203463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.203496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.203726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.203779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.203955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.203992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 
00:35:55.128 [2024-07-26 06:28:06.204181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.204214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.204376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.204407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.204612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.204665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.204842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.204874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.205084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.205120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 
00:35:55.128 [2024-07-26 06:28:06.205330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.205362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.205552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.205606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.205793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.205826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.205983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.206033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.206191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.206223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 
00:35:55.128 [2024-07-26 06:28:06.206437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.206473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.206653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.206694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.206845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.206892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.207074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.207108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.207284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.207320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 
00:35:55.128 [2024-07-26 06:28:06.207489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.207524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.207691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.207726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.207905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.207937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.208112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.128 [2024-07-26 06:28:06.208149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.128 qpair failed and we were unable to recover it. 00:35:55.128 [2024-07-26 06:28:06.208335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.208367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 
00:35:55.129 [2024-07-26 06:28:06.208570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.208606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.208754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.208786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.208916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.208947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.209141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.209177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.209390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.209422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 
00:35:55.129 [2024-07-26 06:28:06.209608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.209640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.209796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.209834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.209995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.210027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.210194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.210244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.210439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.210472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 
00:35:55.129 [2024-07-26 06:28:06.210672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.210727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.211004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.211037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.211287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.211321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.211465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.211497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.211702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.211756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 
00:35:55.129 [2024-07-26 06:28:06.211954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.211988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.212140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.212176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.212421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.212454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.212722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.212755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.212951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.212987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 
00:35:55.129 [2024-07-26 06:28:06.213211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.213243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.213479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.213512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.213729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.213782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.213945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.213980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.214162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.214198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 
00:35:55.129 [2024-07-26 06:28:06.214382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.214414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.214570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.214628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.214808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.214843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.215011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.215047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 00:35:55.129 [2024-07-26 06:28:06.215259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.129 [2024-07-26 06:28:06.215292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.129 qpair failed and we were unable to recover it. 
00:35:55.130 [2024-07-26 06:28:06.215499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.215536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.215719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.215759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.215932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.215968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.216155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.216193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.216323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.216357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 
00:35:55.130 [2024-07-26 06:28:06.216544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.216596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.216768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.216804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.217009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.217045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.217239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.217271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.217446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.217482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 
00:35:55.130 [2024-07-26 06:28:06.217645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.217678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.217859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.217892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.218039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.218088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.218225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.218262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.218451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.218484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 
00:35:55.130 [2024-07-26 06:28:06.218645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.218678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.218878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.218914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.219126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.219164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.219348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.219381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.219555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.219605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 
00:35:55.130 [2024-07-26 06:28:06.219810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.219867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.220034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.220121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.220414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.220448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.220640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.220693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.220917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.220972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 
00:35:55.130 [2024-07-26 06:28:06.221169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.221202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.221346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.221388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.221597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.221649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.221874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.221931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.222068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.222112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 
00:35:55.130 [2024-07-26 06:28:06.222294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.222353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.222535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.222585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.222843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.222894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.223103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.223156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 00:35:55.130 [2024-07-26 06:28:06.223342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.130 [2024-07-26 06:28:06.223393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.130 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.223601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.223652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.223783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.223816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.223997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.224030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.224187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.224238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.224461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.224509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.224675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.224710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.224849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.224886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.225042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.225081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.225348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.225382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.225537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.225569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.225712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.225767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.225934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.225981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.226147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.226179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.226328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.226363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.226535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.226570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.226744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.226781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.226954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.226989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.227169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.227201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.227408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.227443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.227658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.227694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.227930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.227965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.228163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.228197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.228356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.228388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.228558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.228594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.228767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.228804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.229005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.229040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.229311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.229344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.229524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.229560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.229763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.229798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.229996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.230032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.230200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.230233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.230399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.230449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.230698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.230734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.230933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.230969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.231156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.231188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 
00:35:55.131 [2024-07-26 06:28:06.231349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.231380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.131 qpair failed and we were unable to recover it. 00:35:55.131 [2024-07-26 06:28:06.231563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.131 [2024-07-26 06:28:06.231595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.231802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.231838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.231988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.232022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.232234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.232265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 
00:35:55.132 [2024-07-26 06:28:06.232423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.232454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.232629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.232665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.232885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.232921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.233108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.233141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.233295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.233327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 
00:35:55.132 [2024-07-26 06:28:06.233453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.233501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.233649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.233689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.233896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.233932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.234087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.234118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.234278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.234309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 
00:35:55.132 [2024-07-26 06:28:06.234493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.234529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.234670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.234705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.234911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.234946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.235148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.235180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.235358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.235393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 
00:35:55.132 [2024-07-26 06:28:06.235537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.235573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.235768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.235804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.235972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.236007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.236173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.236205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.236411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.236446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 
00:35:55.132 [2024-07-26 06:28:06.236611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.236661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.236832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.236868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.237009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.237043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.237203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.237235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.237397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.237429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 
00:35:55.132 [2024-07-26 06:28:06.237608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.237644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.237866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.237901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.238076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.238126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.238262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.238294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 00:35:55.132 [2024-07-26 06:28:06.238505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.132 [2024-07-26 06:28:06.238541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.132 qpair failed and we were unable to recover it. 
00:35:55.132 [2024-07-26 06:28:06.238698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.132 [2024-07-26 06:28:06.238747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.132 qpair failed and we were unable to recover it.
00:35:55.132 [2024-07-26 06:28:06.238923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.132 [2024-07-26 06:28:06.238958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.132 qpair failed and we were unable to recover it.
00:35:55.132 [2024-07-26 06:28:06.239165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.132 [2024-07-26 06:28:06.239198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.239373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set
00:35:55.133 [2024-07-26 06:28:06.239684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.239736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.239898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.239939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.240152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.240187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.240346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.240398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.240609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.240657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.240891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.240945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.241139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.241174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.241313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.241346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.241534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.241582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.241779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.241833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.242014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.242055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.242225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.242256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.242450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.242488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.242656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.242692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.242855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.242909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.243099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.243133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.243268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.243301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.243478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.243514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.243690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.243726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.243885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.243937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.244126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.244160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.244295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.244327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.244542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.244578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.244741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.244773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.244925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.244957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.245083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.245116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.245278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.245315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.245489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.245525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.245726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.245762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.245934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.245966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.246110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.246144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.246346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.246382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.246560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.246597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.246785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.246822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.246966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.133 [2024-07-26 06:28:06.247002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.133 qpair failed and we were unable to recover it.
00:35:55.133 [2024-07-26 06:28:06.247191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.247223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.247392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.247445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.247630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.247668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.247879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.247911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.248065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.248117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.248258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.248290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.248491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.248524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.248783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.248838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.249023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.249057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.249254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.249286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.249467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.249502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.249677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.249713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.249897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.249928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.250057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.250112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.250315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.250347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.250582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.250614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.250776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.250844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.251052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.251096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.251255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.251287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.251445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.251480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.251660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.251695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.251842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.251874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.252036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.252085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.252241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.252273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.252427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.252458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.252628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.252663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.134 [2024-07-26 06:28:06.252863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.134 [2024-07-26 06:28:06.252898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.134 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.253079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.253112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.253252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.253283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.253424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.253457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.253592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.253625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.253749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.253785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.253981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.254015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.254178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.254210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.254373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.254423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.254564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.254599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.254747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.254778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.254953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.254989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.255166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.255199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.255338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.255371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.255500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.255532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.255700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.255751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.255934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.255966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.256129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.256162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.256296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.256327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.256490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.256523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.256704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.256740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.256914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.256951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.257097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.257129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.257264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.257295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.257540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.257573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.257725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.257758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.257969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.258005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.258203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.258236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.258399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.258431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.258561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.258609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.258781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.258816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.259001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.259032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.259179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.259211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.259353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.259388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.259640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.259674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.259859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.135 [2024-07-26 06:28:06.259907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.135 qpair failed and we were unable to recover it.
00:35:55.135 [2024-07-26 06:28:06.260124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.260157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.260317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.260349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.260531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.260566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.260743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.260779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.260942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.260974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.261138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.261169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.261330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.261382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.261548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.261581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.261740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.261789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.261972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.262013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.262214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.262246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.262382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.262414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.262601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.262633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.262805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.262838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.262966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.136 [2024-07-26 06:28:06.263015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.136 qpair failed and we were unable to recover it.
00:35:55.136 [2024-07-26 06:28:06.263180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.263213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.263382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.263414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.263558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.263594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.263775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.263810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.263993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.264026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 
00:35:55.136 [2024-07-26 06:28:06.264223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.264255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.264421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.264457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.264637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.264669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.264801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.264850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.265068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.265118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 
00:35:55.136 [2024-07-26 06:28:06.265306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.265338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.265499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.265534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.265679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.265715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.265887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.265919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.266090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.266142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 
00:35:55.136 [2024-07-26 06:28:06.266298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.266330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.266536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.266569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.266749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.266785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.266991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.267027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.267194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.267227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 
00:35:55.136 [2024-07-26 06:28:06.267401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.136 [2024-07-26 06:28:06.267438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.136 qpair failed and we were unable to recover it. 00:35:55.136 [2024-07-26 06:28:06.267624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.267660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.267807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.267839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.267996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.268044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.268255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.268287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 
00:35:55.137 [2024-07-26 06:28:06.268454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.268487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.268639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.268675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.268851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.268884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.269019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.269054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.269237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.269273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 
00:35:55.137 [2024-07-26 06:28:06.269471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.269507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.269667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.269700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.269857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.269907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.270111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.270148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.270335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.270381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 
00:35:55.137 [2024-07-26 06:28:06.270577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.270613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.270788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.270825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.270971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.271003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.271143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.271176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.271340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.271381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 
00:35:55.137 [2024-07-26 06:28:06.271574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.271606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.271809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.271844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.272011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.272047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.272220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.272253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.272388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.272421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 
00:35:55.137 [2024-07-26 06:28:06.272626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.272662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.272823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.272855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.273019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.273081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.273229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.273276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.273459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.273493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 
00:35:55.137 [2024-07-26 06:28:06.273659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.273692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.273848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.273882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.274072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.274111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.274264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.274298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.274506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.274540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 
00:35:55.137 [2024-07-26 06:28:06.274675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.274712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.274862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.274901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.275077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.137 [2024-07-26 06:28:06.275114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.137 qpair failed and we were unable to recover it. 00:35:55.137 [2024-07-26 06:28:06.275267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.275303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.275489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.275522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 
00:35:55.138 [2024-07-26 06:28:06.275725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.275764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.275953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.275994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.276169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.276204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.276349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.276384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.276550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.276592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 
00:35:55.138 [2024-07-26 06:28:06.276758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.276790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.276939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.276980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.277205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.277239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.277398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.277445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.277627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.277664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 
00:35:55.138 [2024-07-26 06:28:06.277852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.277902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.278080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.278143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.278315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.278381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.278544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.278576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.278738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.278781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 
00:35:55.138 [2024-07-26 06:28:06.278918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.278962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.279119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.279152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.279293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.279328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.279474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.279517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.279705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.279739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 
00:35:55.138 [2024-07-26 06:28:06.279928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.279965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.280142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.280196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.280383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.280416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.280561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.280595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 00:35:55.138 [2024-07-26 06:28:06.280750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.138 [2024-07-26 06:28:06.280791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.138 qpair failed and we were unable to recover it. 
00:35:55.138 [2024-07-26 06:28:06.280956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.280988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.138 [2024-07-26 06:28:06.281163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.281202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.138 [2024-07-26 06:28:06.281379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.281424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.138 [2024-07-26 06:28:06.281615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.281650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.138 [2024-07-26 06:28:06.281860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.281894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.138 [2024-07-26 06:28:06.282064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.282104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.138 [2024-07-26 06:28:06.282286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.282323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.138 [2024-07-26 06:28:06.282493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.138 [2024-07-26 06:28:06.282530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.138 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.282702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.282740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.282925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.282959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.283116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.283150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.283314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.283349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.283495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.283528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.283696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.283731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.283892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.283929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.284076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.284110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.284302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.284350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.284496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.284530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.284664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.284697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.284857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.284889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.285022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.285055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.285230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.285263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.285399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.285432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.285563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.285595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.285754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.285786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.285937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.285969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.286132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.286164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.286331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.286373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.286532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.286565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.286696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.286733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.286886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.286918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.287075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.287118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.287254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.287286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.287454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.287486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.287647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.139 [2024-07-26 06:28:06.287680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.139 qpair failed and we were unable to recover it.
00:35:55.139 [2024-07-26 06:28:06.287863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.287895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.288055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.288096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.288232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.288265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.288426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.288458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.288582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.288615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.288772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.288804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.288962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.288995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.289186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.289219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.289347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.289380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.289524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.289572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.289771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.289803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.289968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.290000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.290146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.290179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.290337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.290379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.290552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.290584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.290718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.290750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.290925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.290960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.291118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.291151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.291288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.291321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.291487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.291519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.291650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.291682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.291856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.291904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.292053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.292099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.292259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.292293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.292497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.292540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.292758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.292795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.292930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.292966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.293108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.293142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.293295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.293328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.293502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.293551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.293734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.293766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.293925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.293957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.294115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.294148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.294277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.294309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.294502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.294539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.294694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.140 [2024-07-26 06:28:06.294747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.140 qpair failed and we were unable to recover it.
00:35:55.140 [2024-07-26 06:28:06.294883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.294918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.295069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.295102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.295237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.295268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.295450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.295482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.295617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.295649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.295808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.295847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.296018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.296055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.296250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.296283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.296419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.296451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.296576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.296608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.296764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.296796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.296918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.296950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.297110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.297158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.297350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.297387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.297561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.297612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.297808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.297843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.297980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.298013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.298198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.298246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.298480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.298517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.298686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.298718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.298847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.298879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.299014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.299050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.299194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.299226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.299406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.299442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.299627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.299664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.299847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.299880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.300016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.300053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.300229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.300263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.300403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.300437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.300618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.300651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.300835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.300871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.301012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.301046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.301250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.301284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.301502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.301535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.301665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.301701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.141 qpair failed and we were unable to recover it.
00:35:55.141 [2024-07-26 06:28:06.301873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.141 [2024-07-26 06:28:06.301906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.302040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.302080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.302215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.302248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.302454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.302506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.302676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.302714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.302867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.302899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.303028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.303076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.303262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.303294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.303466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.303498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.303660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.303692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.303865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.303900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.304057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.304095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.304261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.304293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.304438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.304470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.304627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.304659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.304812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.304863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.305052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.305099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.305277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.305312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.305506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.305540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.305704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.305738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.305954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.305991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.306210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.306243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.306374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.306434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.306669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.306700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.306862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.306898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.307043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.307103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.307246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.307278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.307408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.307440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.307610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.307646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.307805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.307837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.308022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.308057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.308225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.308258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.308388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.308422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.308586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.308619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.308790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.308841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.142 [2024-07-26 06:28:06.309012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.142 [2024-07-26 06:28:06.309048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.142 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.309231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.309263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.309408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.309440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.309593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.309625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.309762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.309794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.309976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.310007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.310180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.310213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.310414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.310450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.310638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.310675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.310808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.310840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.311002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.311035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.311213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.311245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.311385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.311430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.311639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.311675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.311854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.311890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.312066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.312098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.312289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.312336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.312506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.312542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.312676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.312711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.312872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.312905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.313072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.313106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.313273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.313306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.313449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.313485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.313674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.313707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.313846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.313879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.314049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.314110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.314280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.314315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.314472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.314505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.314676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.314712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.314864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.314901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.315089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.315134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.315294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.315326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.315516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.315548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.315682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.315714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.315879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.315912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.316092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.143 [2024-07-26 06:28:06.316134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.143 qpair failed and we were unable to recover it.
00:35:55.143 [2024-07-26 06:28:06.316297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.316337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.316482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.316515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.316676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.316710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.316868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.316906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.317079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.317127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.317308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.317343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.317501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.317533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.317669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.317701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.317834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.317866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.317992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.318025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.318197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.318230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.318362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.318412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.318602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.144 [2024-07-26 06:28:06.318638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.144 qpair failed and we were unable to recover it.
00:35:55.144 [2024-07-26 06:28:06.318759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.318791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.318976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.319014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.319188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.319221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.319360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.319394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.319530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.319562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 
00:35:55.144 [2024-07-26 06:28:06.319757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.319789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.319940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.319976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.320174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.320207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.320330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.320373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.320531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.320563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 
00:35:55.144 [2024-07-26 06:28:06.320688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.320720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.320876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.320908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.321053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.321092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.144 [2024-07-26 06:28:06.321224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.144 [2024-07-26 06:28:06.321256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.144 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.321395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.321427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 
00:35:55.145 [2024-07-26 06:28:06.321560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.321594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.321785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.321817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.321979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.322013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.322166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.322198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.322330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.322363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 
00:35:55.145 [2024-07-26 06:28:06.322519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.322561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.322696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.322728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.322859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.322891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.323053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.323093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.323259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.323298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 
00:35:55.145 [2024-07-26 06:28:06.323463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.323512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.323712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.323755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.323943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.323975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.324135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.324169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.324301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.324334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 
00:35:55.145 [2024-07-26 06:28:06.324470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.324502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.324654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.324687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.324863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.324899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.325051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.325114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.325252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.325286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 
00:35:55.145 [2024-07-26 06:28:06.325544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.325581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.325737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.325770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.325915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.325948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.326084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.326119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.326313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.326350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 
00:35:55.145 [2024-07-26 06:28:06.326537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.326586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.326748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.326780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.326910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.326943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.327106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.327140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.327299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.327332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 
00:35:55.145 [2024-07-26 06:28:06.327481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.327514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.327666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.327715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.327909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.327942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.145 qpair failed and we were unable to recover it. 00:35:55.145 [2024-07-26 06:28:06.328120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.145 [2024-07-26 06:28:06.328168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.328334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.328380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 
00:35:55.146 [2024-07-26 06:28:06.328547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.328580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.328716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.328748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.328913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.328968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.329150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.329185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.329348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.329382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 
00:35:55.146 [2024-07-26 06:28:06.329614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.329651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.329822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.329866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.330082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.330115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.330276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.330311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.330478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.330510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 
00:35:55.146 [2024-07-26 06:28:06.330674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.330706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.330832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.330864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.331010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.331052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.331196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.331232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.331391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.331429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 
00:35:55.146 [2024-07-26 06:28:06.331614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.331647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.331811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.331849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.331999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.332031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.332191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.332224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.332373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.332407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 
00:35:55.146 [2024-07-26 06:28:06.332563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.332613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.332765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.332797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.332959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.332991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.333161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.333197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.333348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.333382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 
00:35:55.146 [2024-07-26 06:28:06.333526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.333560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.333718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.333768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.333953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.333986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.334149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.334182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.334353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.334387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 
00:35:55.146 [2024-07-26 06:28:06.334549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.334581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.146 qpair failed and we were unable to recover it. 00:35:55.146 [2024-07-26 06:28:06.334767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.146 [2024-07-26 06:28:06.334843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.335001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.335048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.335245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.335277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.335419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.335451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 
00:35:55.147 [2024-07-26 06:28:06.335605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.335638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.335793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.335826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.335980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.336031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.336221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.336268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.336455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.336488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 
00:35:55.147 [2024-07-26 06:28:06.336625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.336658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.336821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.336854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.337015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.337056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.337210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.337243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 00:35:55.147 [2024-07-26 06:28:06.337416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.147 [2024-07-26 06:28:06.337453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.147 qpair failed and we were unable to recover it. 
00:35:55.147 [2024-07-26 06:28:06.337631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.337664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.337793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.337826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.338010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.338051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.338199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.338232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.338365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.338416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.338649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.338708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.338894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.338927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.339090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.339124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.339285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.339318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.339459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.339492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.339626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.339658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.339820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.339875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.340073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.340107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.340272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.340305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.340480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.340512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.340675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.340708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.340865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.340907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.341108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.341155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.341358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.341392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.341560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.341593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.341753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.341785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.147 qpair failed and we were unable to recover it.
00:35:55.147 [2024-07-26 06:28:06.341921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.147 [2024-07-26 06:28:06.341955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.342125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.342160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.342329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.342371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.342505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.342538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.342710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.342743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.342914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.342946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.343077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.343111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.343245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.343278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.343418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.343450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.343664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.343697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.343831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.343864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.344029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.344075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.344242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.344275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.344422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.344455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.344620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.344663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.344840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.344900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.345084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.345136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.345308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.345347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.345508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.345541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.345704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.345737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.345939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.345972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.346138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.346172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.346331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.346369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.346510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.346544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.346733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.346766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.346949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.346986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.347217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.347264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.347432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.347466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.347678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.347746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.347889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.347924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.348108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.348146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.348329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.348381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.348567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.348599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.348751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.348784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.348934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.348970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.349155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.349187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.349312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.148 [2024-07-26 06:28:06.349350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.148 qpair failed and we were unable to recover it.
00:35:55.148 [2024-07-26 06:28:06.349506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.349557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.349814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.349870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.350039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.350086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.350268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.350300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.350511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.350547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.350738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.350770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.350889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.350933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.351135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.351172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.351327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.351368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.351486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.351534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.351713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.351749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.351923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.351956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.352140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.352176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.352376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.352412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.352559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.352590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.352790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.352826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.352967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.353003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.353163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.353195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.353376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.353426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.353646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.353706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.353868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.353900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.354087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.354120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.354250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.354283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.354452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.354484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.354726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.354794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.354980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.355012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.355188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.355221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.355412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.149 [2024-07-26 06:28:06.355447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.149 qpair failed and we were unable to recover it.
00:35:55.149 [2024-07-26 06:28:06.355658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.355690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.355817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.355850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.356025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.356068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.356240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.356272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.356431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.356463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.356722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.356784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.356978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.357014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.357196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.357229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.357416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.150 [2024-07-26 06:28:06.357452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.150 qpair failed and we were unable to recover it.
00:35:55.150 [2024-07-26 06:28:06.357667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.357699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.357882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.357914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.358120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.358153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.358326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.358377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.358579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.358614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 
00:35:55.150 [2024-07-26 06:28:06.358858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.358915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.359130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.359164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.359302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.359346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.359578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.359625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.359782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.359817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 
00:35:55.150 [2024-07-26 06:28:06.360021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.360055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.360231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.360265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.360427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.360463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.360674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.360706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.360875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.360937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 
00:35:55.150 [2024-07-26 06:28:06.361081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.361132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.361289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.361321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.361534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.361571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.361760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.361793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.361993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.362029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 
00:35:55.150 [2024-07-26 06:28:06.362219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.362252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.362464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.362516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.362698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.362733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.362904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.362983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.363161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.363197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 
00:35:55.150 [2024-07-26 06:28:06.363402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.363434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.363628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.363685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.363879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.150 [2024-07-26 06:28:06.363936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.150 qpair failed and we were unable to recover it. 00:35:55.150 [2024-07-26 06:28:06.364105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.364137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.364299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.364332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 
00:35:55.151 [2024-07-26 06:28:06.364583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.364640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.364845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.364881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.365072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.365122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.365286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.365318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.365476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.365508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 
00:35:55.151 [2024-07-26 06:28:06.365662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.365698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.365868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.365909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.366054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.366103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.366260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.366292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.366424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.366472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 
00:35:55.151 [2024-07-26 06:28:06.366656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.366689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.366878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.366913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.367068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.367105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.367258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.367290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.367492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.367528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 
00:35:55.151 [2024-07-26 06:28:06.367668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.367704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.367860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.367892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.368032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.368091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.368291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.368337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.368523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.368558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 
00:35:55.151 [2024-07-26 06:28:06.368742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.368779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.368957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.368993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.369181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.369213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.369417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.369452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.369617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.369651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 
00:35:55.151 [2024-07-26 06:28:06.369799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.369831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.369970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.370021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.370258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.370305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.370479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.370513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.370687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.370751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 
00:35:55.151 [2024-07-26 06:28:06.371000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.371089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.371295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.371327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.151 [2024-07-26 06:28:06.371488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.151 [2024-07-26 06:28:06.371523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.151 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.371697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.371733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.371889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.371920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 
00:35:55.152 [2024-07-26 06:28:06.372054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.372116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.372319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.372351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.372578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.372610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.372757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.372793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.372995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.373031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 
00:35:55.152 [2024-07-26 06:28:06.373211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.373243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.373390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.373425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.373577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.373613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.373820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.373852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.374040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.374083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 
00:35:55.152 [2024-07-26 06:28:06.374299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.374331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.374506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.374546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.374720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.374757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.374908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.374944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.375103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.375136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 
00:35:55.152 [2024-07-26 06:28:06.375328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.375380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.375550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.375586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.375794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.375826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.376008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.376049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.376249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.376282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 
00:35:55.152 [2024-07-26 06:28:06.376430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.376462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.376645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.376677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.376866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.376901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.377044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.377084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.377277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.377310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 
00:35:55.152 [2024-07-26 06:28:06.377504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.377535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.377719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.377751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.377877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.377909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.378120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.378168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 00:35:55.152 [2024-07-26 06:28:06.378312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.152 [2024-07-26 06:28:06.378346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.152 qpair failed and we were unable to recover it. 
00:35:55.152 [2024-07-26 06:28:06.378525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.152 [2024-07-26 06:28:06.378562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.152 qpair failed and we were unable to recover it.
00:35:55.152 [2024-07-26 06:28:06.378714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.152 [2024-07-26 06:28:06.378750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.152 qpair failed and we were unable to recover it.
00:35:55.152 [2024-07-26 06:28:06.378903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.152 [2024-07-26 06:28:06.378935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.152 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.379093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.379126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.379300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.379364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.379551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.379585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.379725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.379758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.379886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.379918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.380110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.380143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.380296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.380328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.380548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.380583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.380766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.380798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.380959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.380991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.381124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.381156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.381333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.381365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.381568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.381603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.381819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.381876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.382053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.382113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.382273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.382305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.382442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.382474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.382639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.382671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.382799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.382836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.382993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.383025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.383225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.383258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.383435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.383471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.383658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.383713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.383896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.383928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.384151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.384185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.384398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.384434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.384599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.384631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.384817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.384853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.385026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.385074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.385251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.153 [2024-07-26 06:28:06.385283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.153 qpair failed and we were unable to recover it.
00:35:55.153 [2024-07-26 06:28:06.385461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.385497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.385772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.385829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.386016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.386089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.386249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.386282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.386490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.386537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.386701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.386735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.386928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.386979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.387169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.387202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.387363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.387395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.387536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.387569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.387731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.387781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.387965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.388001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.388220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.388253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.388423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.388459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.388639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.388671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.388852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.388888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.389063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.389100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.389253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.389286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.389471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.389503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.389698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.389733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.389874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.389907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.390066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.390119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.390311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.390358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.390589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.390624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.390780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.390816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.390984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.391021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.391203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.391237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.391412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.391447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.391643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.391708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.154 [2024-07-26 06:28:06.391914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.154 [2024-07-26 06:28:06.391947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.154 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.392096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.392146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.392308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.392340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.392495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.392527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.392698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.392734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.392902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.392938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.393122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.393154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.393312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.393361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.393544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.393575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.393743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.393776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.393911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.393943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.394138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.394171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.394325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.394357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.394506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.394541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.394724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.394756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.394884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.394916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.395053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.395111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.395293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.395325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.395486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.395518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.395704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.395736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.395942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.395974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.396131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.396164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.396349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.396385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.396622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.396678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.396870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.396902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.397065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.397097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.397304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.397370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.397531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.397566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.397768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.397805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.398018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.398054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.398242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.398274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.398487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.398519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.398703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.155 [2024-07-26 06:28:06.398735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.155 qpair failed and we were unable to recover it.
00:35:55.155 [2024-07-26 06:28:06.398947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.155 [2024-07-26 06:28:06.398980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.155 qpair failed and we were unable to recover it. 00:35:55.155 [2024-07-26 06:28:06.399143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.155 [2024-07-26 06:28:06.399186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.155 qpair failed and we were unable to recover it. 00:35:55.155 [2024-07-26 06:28:06.399316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.155 [2024-07-26 06:28:06.399368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.155 qpair failed and we were unable to recover it. 00:35:55.155 [2024-07-26 06:28:06.399548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.155 [2024-07-26 06:28:06.399580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.155 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.399783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.399820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 
00:35:55.156 [2024-07-26 06:28:06.400002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.400040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.400228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.400266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.400445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.400481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.400667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.400700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.400885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.400918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 
00:35:55.156 [2024-07-26 06:28:06.401089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.401142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.401268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.401301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.401458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.401490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.401640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.401675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.401857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.401892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 
00:35:55.156 [2024-07-26 06:28:06.402069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.402102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.402258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.402290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.402499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.402532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.402713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.402745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.402920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.402955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 
00:35:55.156 [2024-07-26 06:28:06.403107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.403144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.403359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.403392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.403555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.403588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.403745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.403794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.403999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.404031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 
00:35:55.156 [2024-07-26 06:28:06.404179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.404211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.404363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.404411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.404597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.404629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.404809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.404844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.405035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.405073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 
00:35:55.156 [2024-07-26 06:28:06.405235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.405267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.405419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.405454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.405661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.405697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.405903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.405938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.406127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.406160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 
00:35:55.156 [2024-07-26 06:28:06.406322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.406354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.406516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.406548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.406725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.406762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.406963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.156 [2024-07-26 06:28:06.406999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.156 qpair failed and we were unable to recover it. 00:35:55.156 [2024-07-26 06:28:06.407165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.407197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.407359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.407391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.407570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.407606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.407822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.407855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.408023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.408055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.408255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.408287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.408443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.408475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.408612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.408649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.408803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.408835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.408968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.409001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.409167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.409200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.409418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.409454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.409592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.409624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.409778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.409828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.410013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.410049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.410248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.410280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.410413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.410464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.410740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.410797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.410975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.411008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.411195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.411233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.411387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.411424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.411635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.411667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.411845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.411882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.412053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.412098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.412275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.412307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.412494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.412531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.412694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.412741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.412965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.413001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.413197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.413230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.413384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.413416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.413575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.413607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.413780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.413815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.413982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.414017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.414208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.414242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.414408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.414441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 00:35:55.157 [2024-07-26 06:28:06.414600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.157 [2024-07-26 06:28:06.414649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.157 qpair failed and we were unable to recover it. 
00:35:55.157 [2024-07-26 06:28:06.414809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.414841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.415001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.415050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.415216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.415252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.415451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.415483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.415617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.415649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 
00:35:55.158 [2024-07-26 06:28:06.415820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.415853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.416048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.416095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.416266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.416302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.416470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.416506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.416691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.416724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 
00:35:55.158 [2024-07-26 06:28:06.416917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.416949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.417134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.417172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.417373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.417405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.417536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.417568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.417720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.417752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 
00:35:55.158 [2024-07-26 06:28:06.417900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.417936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.418089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.418122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.418278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.418310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.418441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.418473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.418638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.418689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 
00:35:55.158 [2024-07-26 06:28:06.418826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.418862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.419041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.419079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.419260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.419296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.419473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.419509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.419718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.419750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 
00:35:55.158 [2024-07-26 06:28:06.419935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.419971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.420112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.420149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.420338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.420370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.420569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.420605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.420747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.420782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 
00:35:55.158 [2024-07-26 06:28:06.420962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.420995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.421201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.421237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.421396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.421430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.421624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.421656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.421836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.421873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 
00:35:55.158 [2024-07-26 06:28:06.422051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.422090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.422283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.422315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.422473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.422506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.158 [2024-07-26 06:28:06.422663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.158 [2024-07-26 06:28:06.422695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.158 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.422858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.422890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 
00:35:55.159 [2024-07-26 06:28:06.423110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.423143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.423272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.423304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.423488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.423520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.423704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.423739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.423883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.423919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 
00:35:55.159 [2024-07-26 06:28:06.424079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.424112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.424277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.424309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.424481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.424517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.424670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.424701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.424868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.424901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 
00:35:55.159 [2024-07-26 06:28:06.425076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.425109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.425277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.425313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.425473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.425505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.425707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.425740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.425872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.425914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 
00:35:55.159 [2024-07-26 06:28:06.426101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.426137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.426312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.426345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.159 [2024-07-26 06:28:06.426508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.159 [2024-07-26 06:28:06.426541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.159 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.426723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.426759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.426929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.426965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 
00:35:55.440 [2024-07-26 06:28:06.427135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.427168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.427341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.427377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.427551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.427587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.427745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.427777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.427935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.427984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 
00:35:55.440 [2024-07-26 06:28:06.428161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.428197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.428357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.428390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.428603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.428639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.428817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.428850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.429019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.429054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 
00:35:55.440 [2024-07-26 06:28:06.429215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.429248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.429412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.429462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.429666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.429698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.429915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.429951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 00:35:55.440 [2024-07-26 06:28:06.430126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.440 [2024-07-26 06:28:06.430164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.440 qpair failed and we were unable to recover it. 
00:35:55.440 [2024-07-26 06:28:06.430343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.430375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.430551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.430586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.430760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.430796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.430949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.430985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.431112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.431146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 
00:35:55.441 [2024-07-26 06:28:06.431326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.431358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.431548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.431580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.431761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.431797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.432011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.432043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.432213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.432245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 
00:35:55.441 [2024-07-26 06:28:06.432387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.432423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.432599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.432635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.432838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.432870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.433019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.433054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.433243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.433280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 
00:35:55.441 [2024-07-26 06:28:06.433489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.433522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.433674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.433706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.433911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.433946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.434102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.434135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.434333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.434370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 
00:35:55.441 [2024-07-26 06:28:06.434579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.434615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.434780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.434814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.434990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.435026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.435215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.435248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.435383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.435416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 
00:35:55.441 [2024-07-26 06:28:06.435565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.441 [2024-07-26 06:28:06.435597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.441 qpair failed and we were unable to recover it. 00:35:55.441 [2024-07-26 06:28:06.435772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.442 [2024-07-26 06:28:06.435807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.442 qpair failed and we were unable to recover it. 00:35:55.442 [2024-07-26 06:28:06.435965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.442 [2024-07-26 06:28:06.435999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.442 qpair failed and we were unable to recover it. 00:35:55.442 [2024-07-26 06:28:06.436159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.442 [2024-07-26 06:28:06.436193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.442 qpair failed and we were unable to recover it. 00:35:55.442 [2024-07-26 06:28:06.436369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.442 [2024-07-26 06:28:06.436405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.442 qpair failed and we were unable to recover it. 
00:35:55.442 [2024-07-26 06:28:06.436566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.442 [2024-07-26 06:28:06.436598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.442 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats ~114 more times with advancing timestamps, 06:28:06.436726 through 06:28:06.460366 ...]
00:35:55.446 [2024-07-26 06:28:06.460557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.446 [2024-07-26 06:28:06.460589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.446 qpair failed and we were unable to recover it. 00:35:55.446 [2024-07-26 06:28:06.460769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.446 [2024-07-26 06:28:06.460812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.446 qpair failed and we were unable to recover it. 00:35:55.446 [2024-07-26 06:28:06.460981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.446 [2024-07-26 06:28:06.461017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.446 qpair failed and we were unable to recover it. 00:35:55.446 [2024-07-26 06:28:06.461187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.446 [2024-07-26 06:28:06.461219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.446 qpair failed and we were unable to recover it. 00:35:55.446 [2024-07-26 06:28:06.461393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.446 [2024-07-26 06:28:06.461429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.446 qpair failed and we were unable to recover it. 
00:35:55.446 [2024-07-26 06:28:06.461589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.446 [2024-07-26 06:28:06.461626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.446 qpair failed and we were unable to recover it. 00:35:55.446 [2024-07-26 06:28:06.461796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.446 [2024-07-26 06:28:06.461828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.446 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.462032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.462078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.462257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.462289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.462488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.462520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 
00:35:55.447 [2024-07-26 06:28:06.462685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.462718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.462878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.462927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.463101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.463134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.463293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.463325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.463521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.463557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 
00:35:55.447 [2024-07-26 06:28:06.463715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.463747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.463927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.463964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.464123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.464157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.464312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.464349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.464516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.464547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 
00:35:55.447 [2024-07-26 06:28:06.464674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.464706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.464860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.464892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.465074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.465110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.465277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.465313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.465494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.465525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 
00:35:55.447 [2024-07-26 06:28:06.465729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.465766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.465938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.465985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.466163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.466195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.466379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.466415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.466586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.466621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 
00:35:55.447 [2024-07-26 06:28:06.466806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.466838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.447 [2024-07-26 06:28:06.467010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.447 [2024-07-26 06:28:06.467051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.447 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.467274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.467306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.467441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.467473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.467656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.467704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 
00:35:55.448 [2024-07-26 06:28:06.467906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.467938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.468078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.468111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.468294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.468329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.468482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.468518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.468700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.468733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 
00:35:55.448 [2024-07-26 06:28:06.468872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.468904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.469066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.469118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.469274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.469306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.469472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.469523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.469666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.469702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 
00:35:55.448 [2024-07-26 06:28:06.469878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.469914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.470127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.470160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.470310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.470359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.470534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.470567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.470702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.470734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 
00:35:55.448 [2024-07-26 06:28:06.470858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.470891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.471020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.471052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.471220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.471252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.471409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.471441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.471600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.471633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 
00:35:55.448 [2024-07-26 06:28:06.471848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.471884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.472106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.472142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.472325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.448 [2024-07-26 06:28:06.472357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.448 qpair failed and we were unable to recover it. 00:35:55.448 [2024-07-26 06:28:06.472488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.472528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.472735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.472772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 
00:35:55.449 [2024-07-26 06:28:06.472988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.473021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.473218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.473254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.473468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.473500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.473685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.473717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.473848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.473880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 
00:35:55.449 [2024-07-26 06:28:06.474039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.474100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.474284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.474316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.474502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.474538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.474691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.474723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.474903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.474940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 
00:35:55.449 [2024-07-26 06:28:06.475135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.475169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.475331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.475370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.475544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.475576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.475747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.475784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.475978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.476014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 
00:35:55.449 [2024-07-26 06:28:06.476217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.476250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.476404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.476441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.476609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.476645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.476825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.476858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.477072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.477109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 
00:35:55.449 [2024-07-26 06:28:06.477310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.477351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.477531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.477564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.477701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.477734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.477889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.477921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.449 qpair failed and we were unable to recover it. 00:35:55.449 [2024-07-26 06:28:06.478155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.449 [2024-07-26 06:28:06.478187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 
00:35:55.450 [2024-07-26 06:28:06.478418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.478450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.478580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.478613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.478800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.478833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.478988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.479025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.479232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.479265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 
00:35:55.450 [2024-07-26 06:28:06.479415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.479457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.479612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.479646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.479778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.479810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.480016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.480052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.480243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.480275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 
00:35:55.450 [2024-07-26 06:28:06.480446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.480478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.480648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.480680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.480856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.480893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.481067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.481108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.481316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.481351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 
00:35:55.450 [2024-07-26 06:28:06.481502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.481537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.481691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.481727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.481901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.481934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.482092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.482126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.482265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.482297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 
00:35:55.450 [2024-07-26 06:28:06.482486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.482518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.482700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.482736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.482917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.482950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.483110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.483143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.450 [2024-07-26 06:28:06.483323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.483355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 
00:35:55.450 [2024-07-26 06:28:06.483558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.450 [2024-07-26 06:28:06.483594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.450 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.483785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.483817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.483984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.484016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.484194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.484231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.484421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.484453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 
00:35:55.451 [2024-07-26 06:28:06.484609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.484642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.484830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.484866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.485037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.485084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.485243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.485276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.485424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.485456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 
00:35:55.451 [2024-07-26 06:28:06.485609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.485641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.485847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.485883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.486055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.486115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.486269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.486302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.486467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.486500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 
00:35:55.451 [2024-07-26 06:28:06.486637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.486669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.486823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.486855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.486982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.487015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.487214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.487247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.487404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.487447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 
00:35:55.451 [2024-07-26 06:28:06.487599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.487635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.487790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.487827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.487989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.488021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.488197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.488231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.488402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.488434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 
00:35:55.451 [2024-07-26 06:28:06.488559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.488592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.488722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.488754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.451 [2024-07-26 06:28:06.488905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.451 [2024-07-26 06:28:06.488953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.451 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.489148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.489185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.489391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.489427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 
00:35:55.452 [2024-07-26 06:28:06.489606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.489639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.489794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.489826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.489959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.489991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.490196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.490247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.490406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.490439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 
00:35:55.452 [2024-07-26 06:28:06.490644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.490680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.490858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.490893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.491078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.491110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.491267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.491299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.491478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.491514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 
00:35:55.452 [2024-07-26 06:28:06.491721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.491753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.491909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.491945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.492125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.492161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.492347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.492380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.492532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.492579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 
00:35:55.452 [2024-07-26 06:28:06.492715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.492751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.492951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.492984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.493164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.493200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.493404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.493440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.493611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.493644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 
00:35:55.452 [2024-07-26 06:28:06.493797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.493846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.493985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.494020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.494224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.494256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.494462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.494498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.452 qpair failed and we were unable to recover it. 00:35:55.452 [2024-07-26 06:28:06.494637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.452 [2024-07-26 06:28:06.494673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 
00:35:55.453 [2024-07-26 06:28:06.494834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.494867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.495076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.495113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.495314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.495352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.495501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.495534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.495703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.495753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 
00:35:55.453 [2024-07-26 06:28:06.495940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.495973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.496158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.496190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.496344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.496395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.496544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.496581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.496790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.496822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 
00:35:55.453 [2024-07-26 06:28:06.496999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.497035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.497193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.497230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.497407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.497439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.497609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.497650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 00:35:55.453 [2024-07-26 06:28:06.497866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.453 [2024-07-26 06:28:06.497898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.453 qpair failed and we were unable to recover it. 
00:35:55.453 [2024-07-26 06:28:06.498081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.453 [2024-07-26 06:28:06.498114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.453 qpair failed and we were unable to recover it.
00:35:55.453 [2024-07-26 06:28:06.498243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.453 [2024-07-26 06:28:06.498275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.453 qpair failed and we were unable to recover it.
00:35:55.453 [2024-07-26 06:28:06.498434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.453 [2024-07-26 06:28:06.498467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.453 qpair failed and we were unable to recover it.
00:35:55.453 [2024-07-26 06:28:06.498617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.453 [2024-07-26 06:28:06.498649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.453 qpair failed and we were unable to recover it.
00:35:55.453 [2024-07-26 06:28:06.498804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.498841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.499026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.499066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.499250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.499282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.499488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.499523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.499723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.499758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.499944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.499976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.500185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.500221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.500405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.500438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.500606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.500638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.500789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.500825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.500973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.501008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.501196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.501229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.501364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.501396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.501580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.501612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.501805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.501837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.501986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.502021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.502241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.502273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.502424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.502457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.502618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.502651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.502808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.502860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.503112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.503145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.503304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.503353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.503491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.503527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.503731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.503763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.503971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.504006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.504185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.504221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.454 [2024-07-26 06:28:06.504400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.454 [2024-07-26 06:28:06.504432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.454 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.504569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.504600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.504783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.504815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.505033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.505072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.505229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.505264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.505440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.505475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.505631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.505663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.505805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.505837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.505994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.506039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.506249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.506282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.506418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.506450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.506661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.506696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.506870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.506901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.507090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.507127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.507302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.507339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.507504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.507536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.507721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.507757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.507904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.507939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.508099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.508132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.508295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.508327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.508488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.508521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.508678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.508710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.508857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.508892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.509071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.509108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.509266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.509299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.509435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.509467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.509620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.455 [2024-07-26 06:28:06.509652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.455 qpair failed and we were unable to recover it.
00:35:55.455 [2024-07-26 06:28:06.509800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.509832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.509988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.510024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.510183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.510215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.510338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.510370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.510575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.510611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.510818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.510854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.511050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.511090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.511247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.511280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.511456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.511491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.511675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.511707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.511910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.511945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.512140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.512177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.512358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.512390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.512565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.512601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.512749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.512784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.512964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.512996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.513199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.513236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.513388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.513424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.513584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.513615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.513798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.513855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.514023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.514075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.514253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.514290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.514474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.514511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.514684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.514719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.514926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.514958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.515100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.515132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.515316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.515348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.456 qpair failed and we were unable to recover it.
00:35:55.456 [2024-07-26 06:28:06.515533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.456 [2024-07-26 06:28:06.515565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.515699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.515749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.515904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.515937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.516104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.516137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.516309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.516345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.516494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.516530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.516702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.516734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.516885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.516921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.517121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.517157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.517333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.517365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.517540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.457 [2024-07-26 06:28:06.517576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.457 qpair failed and we were unable to recover it.
00:35:55.457 [2024-07-26 06:28:06.517780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.517812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.518012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.518048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.518264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.518296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.518506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.518542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.518728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.518760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 
00:35:55.457 [2024-07-26 06:28:06.518891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.518923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.519122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.519172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.519343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.519385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.519551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.519587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.519784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.519820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 
00:35:55.457 [2024-07-26 06:28:06.519998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.520030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.520193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.520227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.520420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.520452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.520635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.520667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.520823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.520858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 
00:35:55.457 [2024-07-26 06:28:06.521023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.457 [2024-07-26 06:28:06.521067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.457 qpair failed and we were unable to recover it. 00:35:55.457 [2024-07-26 06:28:06.521231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.521263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.521394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.521444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.521618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.521654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.521805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.521837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 
00:35:55.458 [2024-07-26 06:28:06.521970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.522002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.522177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.522211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.522373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.522405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.522580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.522621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.522797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.522834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 
00:35:55.458 [2024-07-26 06:28:06.523016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.523052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.523256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.523292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.523443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.523480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.523682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.523716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.523883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.523915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 
00:35:55.458 [2024-07-26 06:28:06.524109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.524143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.524299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.524339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.524473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.524505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.524701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.524736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.524888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.524925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 
00:35:55.458 [2024-07-26 06:28:06.525134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.525168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.525304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.525344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.525487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.525519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.525705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.525739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.525912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.525948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 
00:35:55.458 [2024-07-26 06:28:06.526145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.526183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.526346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.526379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.458 [2024-07-26 06:28:06.526560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.458 [2024-07-26 06:28:06.526598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.458 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.526767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.526800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.526968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.527002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 
00:35:55.459 [2024-07-26 06:28:06.527143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.527188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.527358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.527392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.527575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.527612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.527765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.527809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.527997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.528030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 
00:35:55.459 [2024-07-26 06:28:06.528208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.528257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.528402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.528440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.528628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.528663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.528888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.528946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.529129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.529171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 
00:35:55.459 [2024-07-26 06:28:06.529361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.529394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.529554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.529587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.529719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.529751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.529892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.529929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.530070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.530104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 
00:35:55.459 [2024-07-26 06:28:06.530277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.530310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.530502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.530538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.530698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.530730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.530896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.530933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.531093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.531127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 
00:35:55.459 [2024-07-26 06:28:06.531313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.531363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.531540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.531578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.531759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.531792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.531952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.459 [2024-07-26 06:28:06.531985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.459 qpair failed and we were unable to recover it. 00:35:55.459 [2024-07-26 06:28:06.532126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.532160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 
00:35:55.460 [2024-07-26 06:28:06.532320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.532353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.532568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.532609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.532752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.532792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.532994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.533030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.533216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.533251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 
00:35:55.460 [2024-07-26 06:28:06.533415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.533450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.533608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.533651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.533792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.533825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.533961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.533995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.534193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.534227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 
00:35:55.460 [2024-07-26 06:28:06.534411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.534448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.534630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.534667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.534824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.534860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.535024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.535057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.535234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.535267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 
00:35:55.460 [2024-07-26 06:28:06.535401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.535435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.535650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.535714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.535869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.460 [2024-07-26 06:28:06.535905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.460 qpair failed and we were unable to recover it. 00:35:55.460 [2024-07-26 06:28:06.536055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.461 [2024-07-26 06:28:06.536096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.461 qpair failed and we were unable to recover it. 00:35:55.461 [2024-07-26 06:28:06.536283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.461 [2024-07-26 06:28:06.536316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.461 qpair failed and we were unable to recover it. 
00:35:55.461 [2024-07-26 06:28:06.536447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.461 [2024-07-26 06:28:06.536480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.461 qpair failed and we were unable to recover it. 00:35:55.461 [2024-07-26 06:28:06.536649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.461 [2024-07-26 06:28:06.536683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.461 qpair failed and we were unable to recover it. 00:35:55.461 [2024-07-26 06:28:06.536818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.461 [2024-07-26 06:28:06.536853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.461 qpair failed and we were unable to recover it. 00:35:55.461 [2024-07-26 06:28:06.537043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.461 [2024-07-26 06:28:06.537084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.461 qpair failed and we were unable to recover it. 00:35:55.461 [2024-07-26 06:28:06.537226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.461 [2024-07-26 06:28:06.537262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.461 qpair failed and we were unable to recover it. 
00:35:55.461 [2024-07-26 06:28:06.537430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.537463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.537628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.537662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.537889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.537922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.538083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.538117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.538279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.538313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.538497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.538530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.538670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.538703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.538837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.538888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.539065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.539099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.539300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.539333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.539477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.539510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.539694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.539727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.539897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.539931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.540069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.540102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.540231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.540264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.540452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.540485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.540671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.540704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.540837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.540870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.541055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.461 [2024-07-26 06:28:06.541094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.461 qpair failed and we were unable to recover it.
00:35:55.461 [2024-07-26 06:28:06.541256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.541289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.541491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.541524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.541684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.541718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.541852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.541885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.542056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.542098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.542272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.542309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.542470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.542503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.542636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.542668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.542830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.542863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.543026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.543065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.543221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.543253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.543396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.543429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.543558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.543592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.543793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.543827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.543981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.544013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.544185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.544219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.544378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.544415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.544579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.544612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.544752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.544785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.544947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.544980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.545141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.545174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.545346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.545378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.545563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.545597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.545770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.545803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.546001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.546055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.546258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.546293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.462 qpair failed and we were unable to recover it.
00:35:55.462 [2024-07-26 06:28:06.546457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.462 [2024-07-26 06:28:06.546500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.546634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.546670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.546805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.546838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.546997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.547029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.547169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.547202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.547345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.547378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.547566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.547599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.547758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.547792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.547932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.547965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.548099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.548134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.548339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.548379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.548562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.548598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.548739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.548773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.548932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.548968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.549133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.549167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.549296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.549333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.549496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.549548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.549741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.549774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.549941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.549974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.550162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.550195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.550337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.550370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.550531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.550564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.550768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.550805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.550984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.551019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.551195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.551229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.551370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.551402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.551567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.463 [2024-07-26 06:28:06.551600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.463 qpair failed and we were unable to recover it.
00:35:55.463 [2024-07-26 06:28:06.551737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.551773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.551921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.551954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.552148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.552181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.552338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.552376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.552526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.552559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.552742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.552775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.552920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.552954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.553124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.553158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.553314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.553347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.553529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.553565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.553749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.553786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.553992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.554025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.554160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.554195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.554358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.554391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.554522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.554555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.554696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.554729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.554914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.554951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.555126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.555160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.555349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.555381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.555545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.555578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.555734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.555767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.555926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.555959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.556086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.464 [2024-07-26 06:28:06.556119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.464 qpair failed and we were unable to recover it.
00:35:55.464 [2024-07-26 06:28:06.556284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.464 [2024-07-26 06:28:06.556318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.464 qpair failed and we were unable to recover it. 00:35:55.464 [2024-07-26 06:28:06.556471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.464 [2024-07-26 06:28:06.556511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.464 qpair failed and we were unable to recover it. 00:35:55.464 [2024-07-26 06:28:06.556702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.464 [2024-07-26 06:28:06.556735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.464 qpair failed and we were unable to recover it. 00:35:55.464 [2024-07-26 06:28:06.556928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.464 [2024-07-26 06:28:06.556961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.464 qpair failed and we were unable to recover it. 00:35:55.464 [2024-07-26 06:28:06.557143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.557177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 
00:35:55.465 [2024-07-26 06:28:06.557320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.557353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.557492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.557525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.557658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.557691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.557885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.557921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.558107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.558140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 
00:35:55.465 [2024-07-26 06:28:06.558301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.558334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.558475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.558509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.558671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.558704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.558913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.558950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.559094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.559142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 
00:35:55.465 [2024-07-26 06:28:06.559298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.559331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.559491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.559524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.559670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.559703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.559835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.559871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.560083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.560120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 
00:35:55.465 [2024-07-26 06:28:06.560329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.560371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.560552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.560586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.560743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.560775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.560920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.560953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.561116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.561150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 
00:35:55.465 [2024-07-26 06:28:06.561337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.561370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.561523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.561556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.561726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.561760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.561908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.561945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 00:35:55.465 [2024-07-26 06:28:06.562125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.465 [2024-07-26 06:28:06.562162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.465 qpair failed and we were unable to recover it. 
00:35:55.465 [2024-07-26 06:28:06.562349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.562383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.562569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.562602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.562764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.562797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.562985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.563018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.563185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.563222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 
00:35:55.466 [2024-07-26 06:28:06.563402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.563439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.563624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.563656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.563840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.563873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.564011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.564045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.564229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.564265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 
00:35:55.466 [2024-07-26 06:28:06.564473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.564510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.564669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.564702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.564862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.564896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.565052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.565095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.565255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.565289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 
00:35:55.466 [2024-07-26 06:28:06.565423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.565458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.565634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.565667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.565866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.565899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.566057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.566098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 00:35:55.466 [2024-07-26 06:28:06.566289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.466 [2024-07-26 06:28:06.566322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.466 qpair failed and we were unable to recover it. 
00:35:55.466 [2024-07-26 06:28:06.566476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.566512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.566673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.566706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.566837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.566870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.567076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.567110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.567273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.567307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 
00:35:55.467 [2024-07-26 06:28:06.567467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.567499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.567647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.567681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.567839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.567872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.568087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.568138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.568314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.568350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 
00:35:55.467 [2024-07-26 06:28:06.568512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.568549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.568706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.568740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.568909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.568945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.569079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.569112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.569304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.569345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 
00:35:55.467 [2024-07-26 06:28:06.569543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.569580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.569735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.569768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.569956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.569989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.570122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.570158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.570329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.570363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 
00:35:55.467 [2024-07-26 06:28:06.570526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.570560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.570699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.570732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.570863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.570896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.571082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.571134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.571314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.571347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 
00:35:55.467 [2024-07-26 06:28:06.571506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.571540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.571693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.467 [2024-07-26 06:28:06.571730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.467 qpair failed and we were unable to recover it. 00:35:55.467 [2024-07-26 06:28:06.571871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.571905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.572067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.572111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.572263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.572300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 
00:35:55.468 [2024-07-26 06:28:06.572480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.572517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.572719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.572753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.572884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.572918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.573048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.573088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.573278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.573311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 
00:35:55.468 [2024-07-26 06:28:06.573487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.573524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.573666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.573703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.573921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.573954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.574090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.574124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 00:35:55.468 [2024-07-26 06:28:06.574282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.468 [2024-07-26 06:28:06.574317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.468 qpair failed and we were unable to recover it. 
00:35:55.472 [2024-07-26 06:28:06.596660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.596694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 00:35:55.472 [2024-07-26 06:28:06.596855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.596889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 00:35:55.472 [2024-07-26 06:28:06.597072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.597123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 00:35:55.472 [2024-07-26 06:28:06.597287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.597320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 00:35:55.472 [2024-07-26 06:28:06.597496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.597529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 
00:35:55.472 [2024-07-26 06:28:06.597685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.597719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 00:35:55.472 [2024-07-26 06:28:06.597877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.597959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 00:35:55.472 [2024-07-26 06:28:06.598148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.472 [2024-07-26 06:28:06.598182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.472 qpair failed and we were unable to recover it. 00:35:55.472 [2024-07-26 06:28:06.598359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.598395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.598565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.598602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 
00:35:55.473 [2024-07-26 06:28:06.598780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.598812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.599006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.599053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.599251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.599289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.599467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.599501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.599692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.599747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 
00:35:55.473 [2024-07-26 06:28:06.599951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.599987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.600179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.600213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.600373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.600408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.600548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.600584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.600763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.600795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 
00:35:55.473 [2024-07-26 06:28:06.600962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.601001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.601160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.601194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.601328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.601361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.601519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.601569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.601775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.601807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 
00:35:55.473 [2024-07-26 06:28:06.601965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.601998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.602189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.602227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.602406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.602443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.473 [2024-07-26 06:28:06.602604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.473 [2024-07-26 06:28:06.602636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.473 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.602769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.602803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 
00:35:55.474 [2024-07-26 06:28:06.602964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.603015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.603227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.603260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.603543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.603600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.603784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.603818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.604001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.604037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 
00:35:55.474 [2024-07-26 06:28:06.604225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.604259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.604444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.604480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.604631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.604663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.604824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.604857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.605044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.605107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 
00:35:55.474 [2024-07-26 06:28:06.605325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.605357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.605546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.605605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.605743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.605778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.605969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.606002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.606218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.606272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 
00:35:55.474 [2024-07-26 06:28:06.606471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.606506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.606671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.606705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.606913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.606979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.607171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.607210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.607394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.607427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 
00:35:55.474 [2024-07-26 06:28:06.607644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.607706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.607850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.607886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.608092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.608125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.608274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.608310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 00:35:55.474 [2024-07-26 06:28:06.608462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.474 [2024-07-26 06:28:06.608498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.474 qpair failed and we were unable to recover it. 
00:35:55.474 [2024-07-26 06:28:06.608685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.608717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.608847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.608879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.609074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.609108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.609264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.609296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.609524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.609583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 
00:35:55.475 [2024-07-26 06:28:06.609795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.609832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.609960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.609992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.610166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.610203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.610388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.610420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.610602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.610634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 
00:35:55.475 [2024-07-26 06:28:06.610816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.610852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.611038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.611081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.611241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.611275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.611482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.611536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.611695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.611734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 
00:35:55.475 [2024-07-26 06:28:06.611891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.611924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.612126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.612166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.612340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.612377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.612579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.612612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 00:35:55.475 [2024-07-26 06:28:06.612805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.475 [2024-07-26 06:28:06.612843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.475 qpair failed and we were unable to recover it. 
00:35:55.475 [2024-07-26 06:28:06.613019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.475 [2024-07-26 06:28:06.613055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.475 qpair failed and we were unable to recover it.
00:35:55.475 [2024-07-26 06:28:06.613251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.475 [2024-07-26 06:28:06.613284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.475 qpair failed and we were unable to recover it.
00:35:55.475 [2024-07-26 06:28:06.613419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.475 [2024-07-26 06:28:06.613470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.475 qpair failed and we were unable to recover it.
00:35:55.475 [2024-07-26 06:28:06.613640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.475 [2024-07-26 06:28:06.613675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.475 qpair failed and we were unable to recover it.
00:35:55.475 [2024-07-26 06:28:06.613830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.475 [2024-07-26 06:28:06.613862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.475 qpair failed and we were unable to recover it.
00:35:55.475 [2024-07-26 06:28:06.614036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.475 [2024-07-26 06:28:06.614082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.475 qpair failed and we were unable to recover it.
00:35:55.475 [2024-07-26 06:28:06.614286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.475 [2024-07-26 06:28:06.614318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.475 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.614474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.614507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.614662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.614713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.614893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.614929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.615081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.615114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.615252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.615307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.615488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.615525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.615728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.615761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.615891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.615924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.616088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.616122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.616281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.616314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.616576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.616634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.616809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.616845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.617026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.617068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.617227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.617261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.617395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.617428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.617583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.617615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.617745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.617777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.617932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.617964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.618091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.618133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.618301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.618336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.618493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.618525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.618681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.618714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.618916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.618953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.619134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.619172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.619349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.619382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.476 [2024-07-26 06:28:06.619556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.476 [2024-07-26 06:28:06.619592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.476 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.619794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.619830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.620018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.620050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.620237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.620273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.620448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.620484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.620675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.620708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.620839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.620888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.621109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.621143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.621302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.621337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.621549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.621610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.621781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.621813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.621996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.622033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.622224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.622256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.622443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.622478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.622663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.622696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.622824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.622856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.623036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.623096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.623249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.623281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.623407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.623440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.623603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.623652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.623830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.623863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.624039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.624084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.624258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.624294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.624492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.624524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.624749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.624809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.624990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.625023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.625192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.477 [2024-07-26 06:28:06.625225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.477 qpair failed and we were unable to recover it.
00:35:55.477 [2024-07-26 06:28:06.625386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.625418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.625592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.625629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.625789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.625821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.625978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.626011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.626174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.626207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.626362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.626394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.626531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.626569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.626697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.626752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.626940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.626976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.627212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.627261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.627481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.627520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.627709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.627742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.627879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.627912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.628098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.628132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.628351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.628384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.628617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.628678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.628854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.628890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.629037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.629075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.629280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.629316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.629486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.629522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.629690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.629723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.629884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.629932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.630105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.630141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.630321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.630353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.630524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.630560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.630741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.630774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.478 qpair failed and we were unable to recover it.
00:35:55.478 [2024-07-26 06:28:06.630900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.478 [2024-07-26 06:28:06.630931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.631087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.631120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.631304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.631340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.631516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.631548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.631795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.631852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.632048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.632106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.632236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.632269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.632442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.632508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.632692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.632733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.632893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.632928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.633095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.633148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.633328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.633365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.633539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.633572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.633775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.633811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.634009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.634042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.634210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.634253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.634434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.634470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.634620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.634656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.634838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.634870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.635036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.635079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.635263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.635303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.635519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.635552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.635814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.635851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.636000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.636036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.636249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.479 [2024-07-26 06:28:06.636281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.479 qpair failed and we were unable to recover it.
00:35:55.479 [2024-07-26 06:28:06.636544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.480 [2024-07-26 06:28:06.636598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.480 qpair failed and we were unable to recover it.
00:35:55.480 [2024-07-26 06:28:06.636808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.480 [2024-07-26 06:28:06.636847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.480 qpair failed and we were unable to recover it.
00:35:55.480 [2024-07-26 06:28:06.637039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.480 [2024-07-26 06:28:06.637080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.480 qpair failed and we were unable to recover it.
00:35:55.480 [2024-07-26 06:28:06.637272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.480 [2024-07-26 06:28:06.637308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.480 qpair failed and we were unable to recover it.
00:35:55.480 [2024-07-26 06:28:06.637481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.480 [2024-07-26 06:28:06.637517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.480 qpair failed and we were unable to recover it.
00:35:55.480 [2024-07-26 06:28:06.637733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.637765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.638019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.638085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.638261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.638296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.638502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.638535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.638715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.638775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 
00:35:55.480 [2024-07-26 06:28:06.638985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.639020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.639206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.639239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.639440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.639494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.639671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.639708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.639893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.639927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 
00:35:55.480 [2024-07-26 06:28:06.640139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.640177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.640320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.640356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.640539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.640572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.640735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.640774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.640917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.640953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 
00:35:55.480 [2024-07-26 06:28:06.641164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.480 [2024-07-26 06:28:06.641197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.480 qpair failed and we were unable to recover it. 00:35:55.480 [2024-07-26 06:28:06.641392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.641447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.641625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.641661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.641818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.641850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.642009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.642043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 
00:35:55.481 [2024-07-26 06:28:06.642253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.642287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.642473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.642506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.642629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.642662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.642879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.642917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.643079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.643112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 
00:35:55.481 [2024-07-26 06:28:06.643276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.643309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.643476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.643514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.643689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.643722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.643863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.643900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.644041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.644095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 
00:35:55.481 [2024-07-26 06:28:06.644278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.644311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.644481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.644539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.644695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.644732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.644935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.644971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.645152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.645188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 
00:35:55.481 [2024-07-26 06:28:06.645365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.645402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.645586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.645619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.645894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.645953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.646138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.646174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.646380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.646412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 
00:35:55.481 [2024-07-26 06:28:06.646630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.646699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.646897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.481 [2024-07-26 06:28:06.646933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.481 qpair failed and we were unable to recover it. 00:35:55.481 [2024-07-26 06:28:06.647088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.647121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.647255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.647287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.647470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.647506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 
00:35:55.482 [2024-07-26 06:28:06.647650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.647682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.647818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.647869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.648023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.648065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.648223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.648255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.648384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.648433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 
00:35:55.482 [2024-07-26 06:28:06.648606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.648641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.648821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.648853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.649031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.649076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.649221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.649254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.649383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.649415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 
00:35:55.482 [2024-07-26 06:28:06.649652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.649707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.649914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.649949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.650123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.650160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.650372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.650424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.650583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.650623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 
00:35:55.482 [2024-07-26 06:28:06.650803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.650837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.650987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.651025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.651256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.651293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.651443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.651476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.651635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.651669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 
00:35:55.482 [2024-07-26 06:28:06.651870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.651907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.652069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.652103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.652285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.652323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.482 qpair failed and we were unable to recover it. 00:35:55.482 [2024-07-26 06:28:06.652535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.482 [2024-07-26 06:28:06.652572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it. 00:35:55.483 [2024-07-26 06:28:06.652723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.652765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it. 
00:35:55.483 [2024-07-26 06:28:06.652902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.652954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it. 00:35:55.483 [2024-07-26 06:28:06.653131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.653167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it. 00:35:55.483 [2024-07-26 06:28:06.653329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.653361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it. 00:35:55.483 [2024-07-26 06:28:06.653577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.653613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it. 00:35:55.483 [2024-07-26 06:28:06.653791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.653826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it. 
00:35:55.483 [2024-07-26 06:28:06.653989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.654021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it.
00:35:55.483 [2024-07-26 06:28:06.656186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.483 [2024-07-26 06:28:06.656239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.483 qpair failed and we were unable to recover it.
00:35:55.487 [2024-07-26 06:28:06.678617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.487 [2024-07-26 06:28:06.678650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.487 qpair failed and we were unable to recover it. 00:35:55.487 [2024-07-26 06:28:06.678812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.487 [2024-07-26 06:28:06.678862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.487 qpair failed and we were unable to recover it. 00:35:55.487 [2024-07-26 06:28:06.679070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.487 [2024-07-26 06:28:06.679102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.487 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.679301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.679337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.679482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.679518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 
00:35:55.488 [2024-07-26 06:28:06.679658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.679690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.679853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.679885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.680033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.680080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.680246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.680278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.680410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.680443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 
00:35:55.488 [2024-07-26 06:28:06.680641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.680677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.680858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.680890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.681032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.681077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.681222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.681260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.681448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.681486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 
00:35:55.488 [2024-07-26 06:28:06.681746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.681806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.681981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.682016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.682195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.682227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.682364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.682400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.682562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.682596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 
00:35:55.488 [2024-07-26 06:28:06.682778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.682813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.682999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.683035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.683189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.683225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.683399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.683432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.683613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.683650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 
00:35:55.488 [2024-07-26 06:28:06.683791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.683827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.683994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.684027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.684166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.684203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.684360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.488 [2024-07-26 06:28:06.684410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.488 qpair failed and we were unable to recover it. 00:35:55.488 [2024-07-26 06:28:06.684592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.684624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 
00:35:55.489 [2024-07-26 06:28:06.684827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.684863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.685019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.685052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.685191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.685225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.685384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.685418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.685588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.685620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 
00:35:55.489 [2024-07-26 06:28:06.685748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.685780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.685958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.685994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.686192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.686229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.686412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.686445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.686574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.686606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 
00:35:55.489 [2024-07-26 06:28:06.686796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.686829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.687039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.687079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.687208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.687241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.687475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.687507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.687641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.687674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 
00:35:55.489 [2024-07-26 06:28:06.687801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.687852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.688028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.688076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.688250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.688283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.688418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.688451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.688651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.688687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 
00:35:55.489 [2024-07-26 06:28:06.688865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.688898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.689077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.689114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.689283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.689318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.689468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.689501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.489 [2024-07-26 06:28:06.689680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.689731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 
00:35:55.489 [2024-07-26 06:28:06.689901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.489 [2024-07-26 06:28:06.689947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.489 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.690152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.690185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.690344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.690376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.690589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.690625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.690808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.690840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 
00:35:55.490 [2024-07-26 06:28:06.690993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.691028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.691193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.691225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.691361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.691393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.691544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.691576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.691758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.691794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 
00:35:55.490 [2024-07-26 06:28:06.691976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.692008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.692197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.692233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.692377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.692420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.692571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.692603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.692733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.692766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 
00:35:55.490 [2024-07-26 06:28:06.692930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.692979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.693188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.693221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.693376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.693412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.693600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.693633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.693789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.693821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 
00:35:55.490 [2024-07-26 06:28:06.693980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.694013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.694153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.694185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.694312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.694345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.694545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.694582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 00:35:55.490 [2024-07-26 06:28:06.694760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.490 [2024-07-26 06:28:06.694795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.490 qpair failed and we were unable to recover it. 
00:35:55.495 [2024-07-26 06:28:06.718204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.718237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.718428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.718460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.718616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.718651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.718822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.718858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.719079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.719111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 
00:35:55.495 [2024-07-26 06:28:06.719298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.719330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.719513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.719545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.719736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.719768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.719953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.719990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.720195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.720232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 
00:35:55.495 [2024-07-26 06:28:06.720421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.720453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.720615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.720651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.720813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.720846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.721013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.721045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.721215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.721248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 
00:35:55.495 [2024-07-26 06:28:06.721425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.721461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.721637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.721669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.721857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.721892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.722050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.722092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.722274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.722306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 
00:35:55.495 [2024-07-26 06:28:06.722483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.722526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.722730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.722762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.722953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.722985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.723116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.723149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.723316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.723366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 
00:35:55.495 [2024-07-26 06:28:06.723540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.723573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.723778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.723813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.724022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.724066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.724223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.724256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 00:35:55.495 [2024-07-26 06:28:06.724389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.495 [2024-07-26 06:28:06.724440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.495 qpair failed and we were unable to recover it. 
00:35:55.495 [2024-07-26 06:28:06.724578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.724614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.724761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.724799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.724954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.725005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.725197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.725230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.725424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.725456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 
00:35:55.496 [2024-07-26 06:28:06.725626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.725658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.725855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.725891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.726079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.726116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.726243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.726276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.726437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.726488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 
00:35:55.496 [2024-07-26 06:28:06.726672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.726704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.726845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.726881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.727057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.727099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.727281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.727314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.727493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.727525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 
00:35:55.496 [2024-07-26 06:28:06.727730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.727762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.727891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.727922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.728066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.728117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.728267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.728304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.728482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.728515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 
00:35:55.496 [2024-07-26 06:28:06.728675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.728707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.728884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.728920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.729072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.729105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.729242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.729292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.729438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.729474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 
00:35:55.496 [2024-07-26 06:28:06.729633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.729665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.729804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.729837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.730005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.730051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.730253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.730285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 00:35:55.496 [2024-07-26 06:28:06.730448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.730480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.496 qpair failed and we were unable to recover it. 
00:35:55.496 [2024-07-26 06:28:06.730658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.496 [2024-07-26 06:28:06.730694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.730868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.730900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.731077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.731125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.731304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.731340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.731521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.731553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 
00:35:55.497 [2024-07-26 06:28:06.731736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.731772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.731938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.731974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.732128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.732161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.732328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.732364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.732508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.732545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 
00:35:55.497 [2024-07-26 06:28:06.732744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.732776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.732931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.732982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.733115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.733151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.733343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.733375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 00:35:55.497 [2024-07-26 06:28:06.733569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.497 [2024-07-26 06:28:06.733605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.497 qpair failed and we were unable to recover it. 
00:35:55.497 [2024-07-26 06:28:06.733763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.497 [2024-07-26 06:28:06.733796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.497 qpair failed and we were unable to recover it.
[The same three-line error repeats near-verbatim for every reconnect attempt (timestamps 2024-07-26 06:28:06.733991 through 06:28:06.757024, all against tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420); duplicate occurrences elided.]
00:35:55.789 [2024-07-26 06:28:06.757178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.757210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.757373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.757405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.757560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.757593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.757727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.757759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.757916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.757948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 
00:35:55.789 [2024-07-26 06:28:06.758071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.758103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.758325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.758357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.758530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.758565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.758743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.758780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.758934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.758967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 
00:35:55.789 [2024-07-26 06:28:06.759168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.759205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.759386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.759421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.759574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.759607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.759809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.759845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 00:35:55.789 [2024-07-26 06:28:06.759991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.789 [2024-07-26 06:28:06.760027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.789 qpair failed and we were unable to recover it. 
00:35:55.789 [2024-07-26 06:28:06.760188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.760221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.760391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.760427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.760574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.760610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.760757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.760789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.760917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.760965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 
00:35:55.790 [2024-07-26 06:28:06.761117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.761153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.761322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.761355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.761513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.761545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.761711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.761744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.761928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.761961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 
00:35:55.790 [2024-07-26 06:28:06.762094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.762127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.762255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.762288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.762476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.762508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.762689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.762725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.762940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.762972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 
00:35:55.790 [2024-07-26 06:28:06.763133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.763166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.763318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.763354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.763494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.763530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.763717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.763749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.763929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.763969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 
00:35:55.790 [2024-07-26 06:28:06.764129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.764162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.764343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.764375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.764555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.764591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.764747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.764780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.764940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.764972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 
00:35:55.790 [2024-07-26 06:28:06.765111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.765143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.765339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.765375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.790 [2024-07-26 06:28:06.765549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.790 [2024-07-26 06:28:06.765581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.790 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.765758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.765794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.765945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.765978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 
00:35:55.791 [2024-07-26 06:28:06.766156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.766189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.766359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.766396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.766571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.766606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.766790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.766822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.766945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.766978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 
00:35:55.791 [2024-07-26 06:28:06.767112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.767144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.767277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.767310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.767469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.767502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.767638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.767686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.767851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.767884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 
00:35:55.791 [2024-07-26 06:28:06.768039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.768097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.768298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.768334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.768522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.768554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.768694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.768726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.768882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.768923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 
00:35:55.791 [2024-07-26 06:28:06.769078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.769111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.769318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.769353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.769526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.769563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.769769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.769801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.769920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.769952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 
00:35:55.791 [2024-07-26 06:28:06.770085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.770118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.770282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.770314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.770497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.770533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.770680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.770716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.770905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.770936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 
00:35:55.791 [2024-07-26 06:28:06.771111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.771148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.771317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.771354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.771530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.771562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.771717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.771750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.791 qpair failed and we were unable to recover it. 00:35:55.791 [2024-07-26 06:28:06.771878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.791 [2024-07-26 06:28:06.771914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.792 qpair failed and we were unable to recover it. 
00:35:55.792 [2024-07-26 06:28:06.772045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.772083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.772209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.772246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.772393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.772425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.772580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.772612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.772772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.772805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.773002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.773037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.773205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.773238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.773372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.773404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.773587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.773639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.773841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.773875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.774097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.774138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.774347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.774382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.774590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.774623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.774794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.774829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.774990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.775027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.775206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.775239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.775440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.775477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.775640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.775682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.775851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.775884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.776037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.776080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.776219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.776254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.776455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.776501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.776659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.776697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.776881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.776921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.777111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.777145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.777299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.777344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.777533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.777571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.777765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.777799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.777969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.778006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.778173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.778206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.778347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.778380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.778511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.778552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.778698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.778738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.778879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.778912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.792 qpair failed and we were unable to recover it.
00:35:55.792 [2024-07-26 06:28:06.779079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.792 [2024-07-26 06:28:06.779112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.779252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.779285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.779446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.779479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.779646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.779700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.779894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.779933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.780089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.780129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.780294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.780328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.780468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.780501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.780692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.780725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.780887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.780921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.781082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.781120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.781278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.781311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.781553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.781608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.781787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.781819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.782015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.782052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.782197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.782229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.782389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.782426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.782616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.782649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.782944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.783002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.783178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.783215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.783386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.783419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.783582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.783615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.783781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.783817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.783950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.783982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.784141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.784174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.784334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.784371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.784534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.784570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.784706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.784739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.784929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.784962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.785108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.785142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.793 [2024-07-26 06:28:06.785317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.793 [2024-07-26 06:28:06.785353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.793 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.785553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.785589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.785795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.785828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.785960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.785993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.786162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.786195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.786358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.786391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.786569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.786607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.786790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.786827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.787012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.787046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.787184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.787217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.787378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.787410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.787575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.787609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.787804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.787840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.788015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.788070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.788230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.788264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.788389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.788429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.788595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.788629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.788812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.788845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.789017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.789053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.789206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.789244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.789451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.789484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.789645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.789682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.789828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.789862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.790024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.790076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.790254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.790288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.790447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.790484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.790631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.790669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.790813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.790851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.791017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.791050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.791207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.791244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.791407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.791445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.791649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.794 [2024-07-26 06:28:06.791685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.794 qpair failed and we were unable to recover it.
00:35:55.794 [2024-07-26 06:28:06.791841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.791878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.792044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.792086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.792260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.792292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.792478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.792511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.792644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.792681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.792893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.792930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.793117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.793151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.793315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.793348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.793509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.793542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.793705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.793739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.793932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.793990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.794145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.794182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.794362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.794401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.794537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.794569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.794753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.794787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.794921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.794971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.795155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.795192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.795341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.795377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.795527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.795 [2024-07-26 06:28:06.795560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.795 qpair failed and we were unable to recover it.
00:35:55.795 [2024-07-26 06:28:06.795689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.795725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.795889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.795923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.796118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.796152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.796291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.796323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.796452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.796489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-07-26 06:28:06.796673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.796706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.796888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.796925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.797100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.797137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.797315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.797349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.797510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.797543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-07-26 06:28:06.797702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.797736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.797892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.797925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.798076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.798112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.798243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.798276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.798452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.798486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 
00:35:55.795 [2024-07-26 06:28:06.798642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.798676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.798820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.795 [2024-07-26 06:28:06.798853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.795 qpair failed and we were unable to recover it. 00:35:55.795 [2024-07-26 06:28:06.798996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.799043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.799246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.799282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.799463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.799500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-07-26 06:28:06.799659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.799692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.799853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.799891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.800051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.800091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.800227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.800260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.800423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.800456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-07-26 06:28:06.800612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.800661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.800819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.800852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.801014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.801047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.801212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.801246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.801436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.801469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-07-26 06:28:06.801616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.801656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.801796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.801830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.801965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.801998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.802167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.802206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.802375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.802407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-07-26 06:28:06.802569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.802602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.802756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.802795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.802931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.802963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.803123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.803157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.803319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.803353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-07-26 06:28:06.803513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.803547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.803705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.803737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.803916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.803953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.804125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.804162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.804352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.804390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-07-26 06:28:06.804517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.804553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.804720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.804756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.804919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.804951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.805143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.805176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.805366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.805403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 
00:35:55.796 [2024-07-26 06:28:06.805565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.805598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.805790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.805823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.806036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.796 [2024-07-26 06:28:06.806078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.796 qpair failed and we were unable to recover it. 00:35:55.796 [2024-07-26 06:28:06.806241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.806328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.806488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.806521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 
00:35:55.797 [2024-07-26 06:28:06.806708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.806740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.806897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.806930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.807107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.807160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.807375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.807408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.807572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.807606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 
00:35:55.797 [2024-07-26 06:28:06.807745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.807778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.807916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.807949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.808094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.808128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.808288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.808322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.808479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.808515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 
00:35:55.797 [2024-07-26 06:28:06.808700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.808736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.808882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.808917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.809081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.809116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.809252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.809295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.809440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.809475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 
00:35:55.797 [2024-07-26 06:28:06.809614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.809647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.809811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.809847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.809993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.810035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.810244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.810283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.810492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.810528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 
00:35:55.797 [2024-07-26 06:28:06.810716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.810750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.810914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.810948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.811109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.811146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.811324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.811360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.811542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.811579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 
00:35:55.797 [2024-07-26 06:28:06.811748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.811782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.811928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.811964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.812138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.812177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.812358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.812392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.812550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.812587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 
00:35:55.797 [2024-07-26 06:28:06.812726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.812762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.812921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.797 [2024-07-26 06:28:06.812958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.797 qpair failed and we were unable to recover it. 00:35:55.797 [2024-07-26 06:28:06.813112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.813152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.813305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.813341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.813504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.813538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 
00:35:55.798 [2024-07-26 06:28:06.813668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.813700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.813863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.813896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.814029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.814077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.814255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.814288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.814447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.814481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 
00:35:55.798 [2024-07-26 06:28:06.814636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.814670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.814802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.814836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.814995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.815029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.815193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.815240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.815392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.815427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 
00:35:55.798 [2024-07-26 06:28:06.815561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.815593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.815879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.815939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.816094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.816131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.816274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.816306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.816469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.816503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 
00:35:55.798 [2024-07-26 06:28:06.816665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.816702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.816903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.816940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.817120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.817160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.817331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.817365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.817527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 
00:35:55.798 [2024-07-26 06:28:06.817701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.817738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.817931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.817995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.818143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.818179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.818312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.818348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.818529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.818568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 
00:35:55.798 [2024-07-26 06:28:06.818741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.818779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.818946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.818982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.819151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.819185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.798 qpair failed and we were unable to recover it. 00:35:55.798 [2024-07-26 06:28:06.819343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.798 [2024-07-26 06:28:06.819384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.819541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.819573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 
00:35:55.799 [2024-07-26 06:28:06.819763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.819800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.819981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.820018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.820185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.820232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.820428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.820467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.820650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.820695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 
00:35:55.799 [2024-07-26 06:28:06.820851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.820890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.821100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.821134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.821271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.821304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.821480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.821518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.821729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.821788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 
00:35:55.799 [2024-07-26 06:28:06.821971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.822008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.822204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.822238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.822369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.822402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.822581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.822614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.822765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.822802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 
00:35:55.799 [2024-07-26 06:28:06.823002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.823042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.823221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.823254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.823423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.823471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.823668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.823708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.823886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.823923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 
00:35:55.799 [2024-07-26 06:28:06.824127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.824162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.824326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.824370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.824524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.824568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.824750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.824786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.825003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.825040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 
00:35:55.799 [2024-07-26 06:28:06.825205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.825246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.825415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.825451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.825677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.825737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.825885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.825923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.799 qpair failed and we were unable to recover it. 00:35:55.799 [2024-07-26 06:28:06.826132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.799 [2024-07-26 06:28:06.826168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 
00:35:55.800 [2024-07-26 06:28:06.826335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.826369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.826555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.826602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.826850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.826907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.827086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.827120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.827251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.827293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 
00:35:55.800 [2024-07-26 06:28:06.827480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.827512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.827711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.827764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.827936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.827978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.828149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.828182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.828342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.828377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 
00:35:55.800 [2024-07-26 06:28:06.828509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.828556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.828729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.828765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.828984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.829024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.829189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.829223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.829353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.829391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 
00:35:55.800 [2024-07-26 06:28:06.829599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.829636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.829795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.829832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.829998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.830034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.830208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.830242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.830386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.830419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 
00:35:55.800 [2024-07-26 06:28:06.830588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.830624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.830792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.830829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.830998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.831035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.831233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.831267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.831431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.831464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 
00:35:55.800 [2024-07-26 06:28:06.831656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.831689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.831846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.831894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.832052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.832093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.832287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.832320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 00:35:55.800 [2024-07-26 06:28:06.832447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.800 [2024-07-26 06:28:06.832480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.800 qpair failed and we were unable to recover it. 
00:35:55.800 [2024-07-26 06:28:06.832665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.800 [2024-07-26 06:28:06.832698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.800 qpair failed and we were unable to recover it.
00:35:55.800 [2024-07-26 06:28:06.832856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.800 [2024-07-26 06:28:06.832889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.800 qpair failed and we were unable to recover it.
00:35:55.800 [2024-07-26 06:28:06.833051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.800 [2024-07-26 06:28:06.833109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.800 qpair failed and we were unable to recover it.
00:35:55.800 [2024-07-26 06:28:06.833254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.800 [2024-07-26 06:28:06.833287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.800 qpair failed and we were unable to recover it.
00:35:55.800 [2024-07-26 06:28:06.833419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.800 [2024-07-26 06:28:06.833453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.800 qpair failed and we were unable to recover it.
00:35:55.800 [2024-07-26 06:28:06.833626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.800 [2024-07-26 06:28:06.833659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.800 qpair failed and we were unable to recover it.
00:35:55.800 [2024-07-26 06:28:06.833818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.833864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.834017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.834050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.834263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.834299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.834456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.834489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.834650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.834684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.834824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.834860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.835044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.835103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.835245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.835285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.835462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.835496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.835664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.835697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.835825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.835861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.836032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.836074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.836215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.836249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.836420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.836460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.836601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.836633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.836789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.836822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.836959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.836991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.837135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.837169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.837352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.837390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.837550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.837590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.837729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.837761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.837957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.837991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.838234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.838271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.838452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.838488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.838664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.838696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.838849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.838883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.839047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.839115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.839303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.839336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.839476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.839514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.839653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.839694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.839834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.839866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.840027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.840071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.840215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.840248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.840386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.840419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.840594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.840629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.840810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.840858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.801 [2024-07-26 06:28:06.841025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.801 [2024-07-26 06:28:06.841074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.801 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.841267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.841300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.841462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.841497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.841659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.841698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.841867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.841900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.842072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.842105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.842260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.842297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.842431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.842464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.842599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.842633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.842802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.842836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.842998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.843031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.843176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.843211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.843375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.843410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.843557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.843591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.843723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.843764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.843929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.843961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.844142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.844176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.844311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.844353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.844515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.844547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.844705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.844738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.844888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.844936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.845096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.845133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.845271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.845310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.845475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.845509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.845681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.845713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.845852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.845887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.846054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.846094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.846231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.846268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.846455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.846489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.846682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.846716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.846918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.846952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.847135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.847169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.847301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.847335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.847524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.847557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.847687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.802 [2024-07-26 06:28:06.847720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.802 qpair failed and we were unable to recover it.
00:35:55.802 [2024-07-26 06:28:06.847865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.847905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.848057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.848106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.848285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.848328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.848488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.848521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.848678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.848710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.848842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.848875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.849035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.849075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.849217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.849251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.849413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.849447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.849687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.849749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.849943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.849975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.850134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.850168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.850347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.850384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.850573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.850606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.850738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.850772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.850955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.850988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.851191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.851224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.851402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.851439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.851616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.851657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.851818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.851850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.852007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.852039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.852180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.852213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.852377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.803 [2024-07-26 06:28:06.852410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.803 qpair failed and we were unable to recover it.
00:35:55.803 [2024-07-26 06:28:06.852597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.803 [2024-07-26 06:28:06.852633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.803 qpair failed and we were unable to recover it. 00:35:55.803 [2024-07-26 06:28:06.852933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.803 [2024-07-26 06:28:06.852991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.803 qpair failed and we were unable to recover it. 00:35:55.803 [2024-07-26 06:28:06.853172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.803 [2024-07-26 06:28:06.853205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.803 qpair failed and we were unable to recover it. 00:35:55.803 [2024-07-26 06:28:06.853361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.803 [2024-07-26 06:28:06.853395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.803 qpair failed and we were unable to recover it. 00:35:55.803 [2024-07-26 06:28:06.853529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.803 [2024-07-26 06:28:06.853578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.803 qpair failed and we were unable to recover it. 
00:35:55.803 [2024-07-26 06:28:06.853765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.803 [2024-07-26 06:28:06.853797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.803 qpair failed and we were unable to recover it. 00:35:55.803 [2024-07-26 06:28:06.853970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.803 [2024-07-26 06:28:06.854006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.803 qpair failed and we were unable to recover it. 00:35:55.803 [2024-07-26 06:28:06.854199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.854232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.854386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.854419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.854554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.854586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 
00:35:55.804 [2024-07-26 06:28:06.854713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.854747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.854912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.854945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.855121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.855158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.855346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.855378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.855538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.855570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 
00:35:55.804 [2024-07-26 06:28:06.855773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.855809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.855983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.856018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.856215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.856248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.856410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.856442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.856601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.856634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 
00:35:55.804 [2024-07-26 06:28:06.856761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.856794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.856944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.856976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.857155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.857191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.857337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.857369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.857611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.857646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 
00:35:55.804 [2024-07-26 06:28:06.857868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.857904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.858146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.858180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.858338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.858373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.858554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.858587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.858723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.858757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 
00:35:55.804 [2024-07-26 06:28:06.858938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.858974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.859188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.859240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.859449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.859485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.859664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.859701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.859889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.859923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 
00:35:55.804 [2024-07-26 06:28:06.860108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.860141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.860326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.860363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.860613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.860647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.860808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.860840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.861022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.861057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 
00:35:55.804 [2024-07-26 06:28:06.861270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.861305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.861485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.804 [2024-07-26 06:28:06.861517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.804 qpair failed and we were unable to recover it. 00:35:55.804 [2024-07-26 06:28:06.861654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.861686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.861817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.861849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.862039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.862080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 
00:35:55.805 [2024-07-26 06:28:06.862268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.862304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.862441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.862477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.862633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.862665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.862816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.862848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.863081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.863129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 
00:35:55.805 [2024-07-26 06:28:06.863298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.863333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.863509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.863549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.863735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.863772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.863923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.863956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.864137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.864175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 
00:35:55.805 [2024-07-26 06:28:06.864366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.864404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.864582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.864614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.864791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.864826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.865005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.865041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.865247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.865279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 
00:35:55.805 [2024-07-26 06:28:06.865426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.865461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.865626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.865686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.865875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.865907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.866108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.866144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.866347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.866400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 
00:35:55.805 [2024-07-26 06:28:06.866570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.866605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.866765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.866798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.866958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.866994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.867206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.867240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.867404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.867437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 
00:35:55.805 [2024-07-26 06:28:06.867617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.867655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.867837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.867873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.868065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.868101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.868272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.868320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.868500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.868533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 
00:35:55.805 [2024-07-26 06:28:06.868691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.868723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.868886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.868918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.805 [2024-07-26 06:28:06.869111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.805 [2024-07-26 06:28:06.869143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.805 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.869296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.869332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.869598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.869653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 
00:35:55.806 [2024-07-26 06:28:06.869811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.869843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.870024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.870056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.870220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.870252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.870414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.870447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.870651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.870687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 
00:35:55.806 [2024-07-26 06:28:06.870862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.870897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.871073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.871106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.871232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.871288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.871533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.871590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 00:35:55.806 [2024-07-26 06:28:06.871788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.806 [2024-07-26 06:28:06.871873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.806 qpair failed and we were unable to recover it. 
00:35:55.806 [2024-07-26 06:28:06.872074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.872110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.872306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.872358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.872553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.872589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.872778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.872815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.872986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.873023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.873206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.873239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.873384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.873420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.873655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.873711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.873912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.873945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.874133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.874182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.874371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.874404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.874562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.874594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.874754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.874791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.874937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.874973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.875177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.875210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.875372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.875404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.875540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.875594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.875786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.875819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.875949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.875982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.876117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.876152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.806 [2024-07-26 06:28:06.876369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.806 [2024-07-26 06:28:06.876402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.806 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.876578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.876620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.876828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.876860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.877044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.877083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.877263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.877300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.877440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.877478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.877629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.877662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.877840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.877878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.878079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.878116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.878318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.878351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.878504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.878537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.878693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.878744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.878924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.878957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.879119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.879152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.879338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.879371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.879570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.879603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.879785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.879821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.879972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.880008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.880197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.880230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.880381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.880417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.880560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.880597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.880742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.880775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.880904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.880953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.881132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.881171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.881374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.881407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.881585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.881621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.881823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.881859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.882016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.882047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.882195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.882228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.882429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.882467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.882670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.882703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.882880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.882916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.883086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.883123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.883333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.883365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.883512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.883549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.883741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.807 [2024-07-26 06:28:06.883801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.807 qpair failed and we were unable to recover it.
00:35:55.807 [2024-07-26 06:28:06.883967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.884000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.884156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.884206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.884354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.884391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.884563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.884595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.884747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.884784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.884955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.884995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.885152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.885185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.885339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.885389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.885626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.885691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.885897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.885929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.886085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.886119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.886307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.886347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.886531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.886564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.886718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.886769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.886955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.886992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.887184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.887217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.887349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.887383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.887587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.887623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.887799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.887831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.888037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.888090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.888300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.888333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.888464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.888496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.888701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.888737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.888912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.888949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.889129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.889162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.889335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.889372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.889551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.889589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.889738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.889770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.889907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.889939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.808 [2024-07-26 06:28:06.890096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.808 [2024-07-26 06:28:06.890129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.808 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.890284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.890317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.890444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.890476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.890686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.890750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.890927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.890960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.891148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.891185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.891484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.891544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.891728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.891761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.891932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.891968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.892120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.892158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.892314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.892347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.892483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.892515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.892730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.892792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.892974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.893007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.893176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.893209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.893359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.893397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.893578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.893617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.893796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.893832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.894005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.894042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.894232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.894265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.894500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.894538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.894822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.894881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.895029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.895067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.895227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.895277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.895474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.895530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.895739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.895771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.895910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.895943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.896155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.896191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.896346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.896379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.896555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.896591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.896846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.896900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.809 [2024-07-26 06:28:06.897078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.809 [2024-07-26 06:28:06.897111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.809 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.897241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.897292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.897593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.897653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.897808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.897850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.898030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.898074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.898254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.898290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.898470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.898502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.898637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.898669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.898852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.898884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.899057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.899096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.899249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.899281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.899543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.899578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.899789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.899821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.899997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.900032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.900222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.900260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.900418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.900450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.900624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.900659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.900833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.900869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.901027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.901069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.901216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.901252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.901426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.901462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.901645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.901677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.901836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.901868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.902000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.902048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.902303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.902336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.902548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.902584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.902798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.902857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.903032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.903071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.903246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.903282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.903463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.903496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.903629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.903661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.903838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.903874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.904043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.904094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.904252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.904284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.904464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.904496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.904692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.904727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.810 qpair failed and we were unable to recover it.
00:35:55.810 [2024-07-26 06:28:06.904879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.810 [2024-07-26 06:28:06.904911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.905075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.905125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.905305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.905336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.905500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.905533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.905737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.905772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.905911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.905946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.906125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.906157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.906361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.906396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.906578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.906614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.906796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.906828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.906967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.907003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.907203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.907239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.907394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.907426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.907561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.907595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.907780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.907815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.907973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.908005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.908138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.908171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.908328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.908380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.908556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.908587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.908722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.908754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.908950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.908985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.909132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.909166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.909324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.909357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.909512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.909545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.909671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.909703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.909881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.909917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.910091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.910128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.910303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.910335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.910510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.910546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.910687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.910728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.910919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.910952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.911136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.911178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.911345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.911377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.911593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.911625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.911774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.811 [2024-07-26 06:28:06.911811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.811 qpair failed and we were unable to recover it.
00:35:55.811 [2024-07-26 06:28:06.912012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.812 [2024-07-26 06:28:06.912044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.812 qpair failed and we were unable to recover it.
00:35:55.812 [2024-07-26 06:28:06.912213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.812 [2024-07-26 06:28:06.912245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.812 qpair failed and we were unable to recover it.
00:35:55.812 [2024-07-26 06:28:06.912424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.812 [2024-07-26 06:28:06.912460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.812 qpair failed and we were unable to recover it.
00:35:55.812 [2024-07-26 06:28:06.912629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.912664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.912856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.912888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.913100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.913133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.913293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.913325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.913453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.913485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 
00:35:55.812 [2024-07-26 06:28:06.913692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.913729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.913902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.913938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.914116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.914150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.914309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.914341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.914474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.914506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 
00:35:55.812 [2024-07-26 06:28:06.914689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.914721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.914902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.914939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.915122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.915158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.915342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.915375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.915522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.915557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 
00:35:55.812 [2024-07-26 06:28:06.915693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.915730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.915945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.915977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.916127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.916164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.916350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.916386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.916546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.916578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 
00:35:55.812 [2024-07-26 06:28:06.916760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.916793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.916972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.917008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.917176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.917208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.917329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.917379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.917552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.917589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 
00:35:55.812 [2024-07-26 06:28:06.917764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.917797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.917932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.917964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.918089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.918122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.918280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.918313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.918444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.918495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 
00:35:55.812 [2024-07-26 06:28:06.918678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.918714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.918894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.918931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.812 qpair failed and we were unable to recover it. 00:35:55.812 [2024-07-26 06:28:06.919108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.812 [2024-07-26 06:28:06.919146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.919309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.919344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.919550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.919582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 
00:35:55.813 [2024-07-26 06:28:06.919755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.919791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.919933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.919969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.920157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.920190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.920398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.920434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.920610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.920647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 
00:35:55.813 [2024-07-26 06:28:06.920858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.920891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.921096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.921133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.921329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.921365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.921548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.921580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.921754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.921790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 
00:35:55.813 [2024-07-26 06:28:06.921994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.922031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.922192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.922225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.922384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.922433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.922587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.922623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.922800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.922832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 
00:35:55.813 [2024-07-26 06:28:06.923004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.923040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.923239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.923276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.923446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.923479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.923636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.923669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.923841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.923878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 
00:35:55.813 [2024-07-26 06:28:06.924066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.924099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.924245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.924281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.924452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.924498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.924677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.924710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.924888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.924924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 
00:35:55.813 [2024-07-26 06:28:06.925124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.925161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.925312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.925345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.925554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.925590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.925733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.925768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.813 qpair failed and we were unable to recover it. 00:35:55.813 [2024-07-26 06:28:06.925946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.813 [2024-07-26 06:28:06.925978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 
00:35:55.814 [2024-07-26 06:28:06.926153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.926190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.926360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.926396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.926570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.926602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.926770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.926806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.926969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.927005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 
00:35:55.814 [2024-07-26 06:28:06.927162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.927195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.927326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.927362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.927521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.927553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.927723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.927755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.927917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.927949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 
00:35:55.814 [2024-07-26 06:28:06.928136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.928169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.928361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.928393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.928566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.928602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.928749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.928785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.928942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.928974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 
00:35:55.814 [2024-07-26 06:28:06.929124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.929173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.929370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.929436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.929656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.929691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.929852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.929885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.930024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.930068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 
00:35:55.814 [2024-07-26 06:28:06.930278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.930312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.930501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.930538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.930697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.930732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.930893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.930925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.931135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.931172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 
00:35:55.814 [2024-07-26 06:28:06.931315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.931354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.931533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.931566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.931719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.931755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.931900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.931939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.814 [2024-07-26 06:28:06.932149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.932182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 
00:35:55.814 [2024-07-26 06:28:06.932360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.814 [2024-07-26 06:28:06.932395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.814 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.932546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.932583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.932753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.932785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.932962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.932998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.933181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.933217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 
00:35:55.815 [2024-07-26 06:28:06.933404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.933437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.933603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.933639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.933838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.933870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.934052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.934092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.934236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.934272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 
00:35:55.815 [2024-07-26 06:28:06.934490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.934547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.934705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.934737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.934864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.934897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.935108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.935144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.935353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.935385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 
00:35:55.815 [2024-07-26 06:28:06.935538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.935575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.935781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.935843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.936000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.936032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.936203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.936237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.936415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.936477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 
00:35:55.815 [2024-07-26 06:28:06.936626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.936658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.936833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.936870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.937039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.937082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.937268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.937301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.937503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.937539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 
00:35:55.815 [2024-07-26 06:28:06.937781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.937840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.938013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.938045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.938176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.938228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.938429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.938482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.938679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.938715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 
00:35:55.815 [2024-07-26 06:28:06.938885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.938919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.939119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.939168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.939326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.939359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.939490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.939524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.939791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.939847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 
00:35:55.815 [2024-07-26 06:28:06.940000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.940034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.940257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.815 [2024-07-26 06:28:06.940294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.815 qpair failed and we were unable to recover it. 00:35:55.815 [2024-07-26 06:28:06.940472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.940509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.940714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.940747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.940926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.940962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 
00:35:55.816 [2024-07-26 06:28:06.941144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.941183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.941368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.941400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.941540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.941573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.941707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.941741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.941875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.941909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 
00:35:55.816 [2024-07-26 06:28:06.942036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.942075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.942294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.942330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.942535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.942568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.942739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.942776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.942976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.943012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 
00:35:55.816 [2024-07-26 06:28:06.943207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.943241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.943416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.943453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.943749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.943811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.944018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.944051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.944242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.944280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 
00:35:55.816 [2024-07-26 06:28:06.944452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.944488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.944669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.944706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.944880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.944917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.945117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.945155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.945318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.945351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 
00:35:55.816 [2024-07-26 06:28:06.945488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.945521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.945728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.945764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.945937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.945970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.946131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.946165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.946344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.946382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 
00:35:55.816 [2024-07-26 06:28:06.946540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.946573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.946744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.946779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.946977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.947012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.947180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.947213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.947347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.947398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 
00:35:55.816 [2024-07-26 06:28:06.947582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.947619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.947802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.947844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.948006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.948039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.816 [2024-07-26 06:28:06.948251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.816 [2024-07-26 06:28:06.948304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.816 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.948491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.948533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 
00:35:55.817 [2024-07-26 06:28:06.948686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.948722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.948891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.948927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.949137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.949171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.949322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.949371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.949529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.949579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 
00:35:55.817 [2024-07-26 06:28:06.949738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.949772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.949945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.949982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.950166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.950204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.950363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.950395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.950571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.950607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 
00:35:55.817 [2024-07-26 06:28:06.950875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.950935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.951144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.951176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.951383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.951419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.951604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.951636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.951796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.951828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 
00:35:55.817 [2024-07-26 06:28:06.952003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.952039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.952223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.952259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.952421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.952453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.952653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.952689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 00:35:55.817 [2024-07-26 06:28:06.952891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.817 [2024-07-26 06:28:06.952923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.817 qpair failed and we were unable to recover it. 
00:35:55.817 [2024-07-26 06:28:06.953081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.953114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.953324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.953364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.953602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.953638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.953844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.953876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.954025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.954066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.954210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.954246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.954402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.954435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.954590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.954638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.954801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.954835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.955001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.955033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.955196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.955229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.955360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.817 [2024-07-26 06:28:06.955394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.817 qpair failed and we were unable to recover it.
00:35:55.817 [2024-07-26 06:28:06.955549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.955582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.955762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.955795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.955973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.956009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.956174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.956207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.956378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.956414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.956714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.956769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.956982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.957015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.957184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.957217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.957425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.957478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.957672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.957707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.957907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.957944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.958132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.958171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.958342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.958375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.958576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.958613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.958811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.958844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.958979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.959012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.959178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.959224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.959371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.959409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.959591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.959623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.959802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.959838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.960057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.960097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.960229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.960272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.960435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.960486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.960669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.960706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.960857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.960889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.961028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.961084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.961291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.961328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.961505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.961537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.961776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.961839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.962013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.962049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.962247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.818 [2024-07-26 06:28:06.962280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.818 qpair failed and we were unable to recover it.
00:35:55.818 [2024-07-26 06:28:06.962416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.962448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.962608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.962641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.962823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.962855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.963038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.963089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.963239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.963275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.963457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.963489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.963662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.963698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.963912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.963944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.964075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.964108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.964264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.964297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.964502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.964538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.964720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.964752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.964963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.964999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.965216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.965249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.965411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.965443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.965706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.965740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.965900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.965950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.966123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.966157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.966335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.966371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.966534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.966571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.966750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.966782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.966985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.967023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.967219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.967252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.967419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.967452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.967634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.967672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.967883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.967924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.968092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.968125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.968288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.968339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.968509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.968546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.968727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.968761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.968966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.969003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.969165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.969198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.969334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.969367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.819 [2024-07-26 06:28:06.969528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.819 [2024-07-26 06:28:06.969580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.819 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.969728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.969764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.969941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.969973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.970177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.970214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.970399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.970432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.970591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.970623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.970762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.970796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.970957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.971007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.971185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.971218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.971376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.971409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.971591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.971628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.971778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.971811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.971952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.972005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.972173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.972206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.972339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.972371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.972528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.972579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.972770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.972828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.973028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.973068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.973193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.820 [2024-07-26 06:28:06.973226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.820 qpair failed and we were unable to recover it.
00:35:55.820 [2024-07-26 06:28:06.973426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.973478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.973703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.973737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.973892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.973930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.974083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.974134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.974299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.974331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 
00:35:55.820 [2024-07-26 06:28:06.974492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.974525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.974753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.974810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.975002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.975035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.975188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.975221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.975383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.975416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 
00:35:55.820 [2024-07-26 06:28:06.975549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.975582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.975714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.975747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.975892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.975926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.976074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.976112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.976268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.976310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 
00:35:55.820 [2024-07-26 06:28:06.976568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.820 [2024-07-26 06:28:06.976624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.820 qpair failed and we were unable to recover it. 00:35:55.820 [2024-07-26 06:28:06.976832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.976864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.977043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.977084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.977232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.977264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.977419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.977452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 
00:35:55.821 [2024-07-26 06:28:06.977655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.977716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.977923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.977962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.978168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.978202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.978361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.978394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.978551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.978584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 
00:35:55.821 [2024-07-26 06:28:06.978742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.978775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.978955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.978992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.979180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.979214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.979398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.979431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.979653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.979686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 
00:35:55.821 [2024-07-26 06:28:06.979814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.979848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.980011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.980043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.980211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.980244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.980543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.980601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.980816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.980848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 
00:35:55.821 [2024-07-26 06:28:06.981004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.981039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.981201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.981235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.981396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.981429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.981583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.981635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.981868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.981924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 
00:35:55.821 [2024-07-26 06:28:06.982119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.982152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.982306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.982356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.982574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.982629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.982809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.982842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.983018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.983054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 
00:35:55.821 [2024-07-26 06:28:06.983244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.983276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.983438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.983471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.983665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.983701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.983858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.821 [2024-07-26 06:28:06.983890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.821 qpair failed and we were unable to recover it. 00:35:55.821 [2024-07-26 06:28:06.984023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.984056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 
00:35:55.822 [2024-07-26 06:28:06.984220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.984253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.984518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.984570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.984767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.984802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.984941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.985001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.985168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.985202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 
00:35:55.822 [2024-07-26 06:28:06.985374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.985407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.985572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.985615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.985825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.985861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.986041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.986083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.986224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.986256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 
00:35:55.822 [2024-07-26 06:28:06.986437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.986475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.986627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.986659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.986810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.986862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.987032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.987078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.987256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.987290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 
00:35:55.822 [2024-07-26 06:28:06.987468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.987505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.987691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.987728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.987891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.987923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.988083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.988117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.988344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.988397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 
00:35:55.822 [2024-07-26 06:28:06.988580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.988615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.988797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.988834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.989006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.989043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.989201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.989234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.989364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.989397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 
00:35:55.822 [2024-07-26 06:28:06.989693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.989750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.989924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.989957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.990157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.990194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.990368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.990406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.990610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.990642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 
00:35:55.822 [2024-07-26 06:28:06.990834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.822 [2024-07-26 06:28:06.990871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.822 qpair failed and we were unable to recover it. 00:35:55.822 [2024-07-26 06:28:06.991026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.991078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.991266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.991299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.991456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.991488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.991773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.991832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 
00:35:55.823 [2024-07-26 06:28:06.991990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.992022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.992190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.992222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.992462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.992516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.992730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.992762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.992960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.992992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 
00:35:55.823 [2024-07-26 06:28:06.993163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.993198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.993353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.993386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.993557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.993594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.993829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.993890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.994054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.994096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 
00:35:55.823 [2024-07-26 06:28:06.994293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.994343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.994545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.994580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.994715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.994748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.994909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.994943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.995133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.995170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 
00:35:55.823 [2024-07-26 06:28:06.995387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.995419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.995600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.995636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.995881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.995940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.996124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.996157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.996329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.996365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 
00:35:55.823 [2024-07-26 06:28:06.996572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.996632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.996807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.996839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.997022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.997063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.997221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.997274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.997463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.997500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 
00:35:55.823 [2024-07-26 06:28:06.997680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.997718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.997922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.997958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.998144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.998178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.998384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.998420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 00:35:55.823 [2024-07-26 06:28:06.998663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.823 [2024-07-26 06:28:06.998719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.823 qpair failed and we were unable to recover it. 
00:35:55.824 [2024-07-26 06:28:06.998937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:06.998969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:06.999110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:06.999144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:06.999333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:06.999386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:06.999572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:06.999604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:06.999818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:06.999853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 
00:35:55.824 [2024-07-26 06:28:07.000068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.000106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.000277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.000310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.000442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.000474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.000606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.000639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.000815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.000848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 
00:35:55.824 [2024-07-26 06:28:07.001028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.001066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.001189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.001222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.001350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.001382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.001566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.001618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.001790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.001826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 
00:35:55.824 [2024-07-26 06:28:07.002035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.002074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.002277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.002310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.002470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.002522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.002673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.002711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.002918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.002954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 
00:35:55.824 [2024-07-26 06:28:07.003160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.003196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 312855 Killed "${NVMF_APP[@]}" "$@" 00:35:55.824 [2024-07-26 06:28:07.003356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.003389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 [2024-07-26 06:28:07.003570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.003607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.824 qpair failed and we were unable to recover it. 00:35:55.824 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:55.824 [2024-07-26 06:28:07.003868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.003927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.824 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:55.824 qpair failed and we were unable to recover it. 
00:35:55.824 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:55.824 [2024-07-26 06:28:07.004118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.824 [2024-07-26 06:28:07.004150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.825 [2024-07-26 06:28:07.004375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.004408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.004540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.004582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.004708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.004742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 
00:35:55.825 [2024-07-26 06:28:07.004899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.004932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.005070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.005121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.005309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.005349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.005494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.005530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.005674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.005710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 
00:35:55.825 [2024-07-26 06:28:07.005884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.005916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.006073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.006107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.006302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.006355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.006561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.006596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.006765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.006798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 
00:35:55.825 [2024-07-26 06:28:07.006992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.007029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.007190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.007224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.007401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.007437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.007653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.007712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.007872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.007909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 
00:35:55.825 [2024-07-26 06:28:07.008041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.008106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=313413 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 313413 [2024-07-26 06:28:07.008300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.008336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 313413 ']' [2024-07-26 06:28:07.008495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.008527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 
00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.825 [2024-07-26 06:28:07.008716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:55.825 [2024-07-26 06:28:07.008752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.825 [2024-07-26 06:28:07.008942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.008976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:55.825 06:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.825 [2024-07-26 06:28:07.009138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.009171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 
00:35:55.825 [2024-07-26 06:28:07.009316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.009352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.009604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.009664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.825 [2024-07-26 06:28:07.009855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.825 [2024-07-26 06:28:07.009887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.825 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.010032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.010075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.010231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.010267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 
00:35:55.826 [2024-07-26 06:28:07.010423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.010457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.010613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.010647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.010857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.010893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.011051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.011092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.011222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.011273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 
00:35:55.826 [2024-07-26 06:28:07.011447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.011494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.011663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.011698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.011877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.011916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.012128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.012162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.012293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.012326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 
00:35:55.826 [2024-07-26 06:28:07.012482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.012519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.012672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.012704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.012838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.012871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.013005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.013039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.013192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.013257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 
00:35:55.826 [2024-07-26 06:28:07.013431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.013466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.013625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.013658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.013851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.013910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.014089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.014123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.014262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.014295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 
00:35:55.826 [2024-07-26 06:28:07.014431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.014465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.014621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.014654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.014795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.014829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.015019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.015084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.015254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.015287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 
00:35:55.826 [2024-07-26 06:28:07.015470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.015507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.015697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.015732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.015890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.015930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.016115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.016153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 00:35:55.826 [2024-07-26 06:28:07.016304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.826 [2024-07-26 06:28:07.016343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.826 qpair failed and we were unable to recover it. 
00:35:55.826 [2024-07-26 06:28:07.016528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.016561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.016745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.016777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.016981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.017014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.017164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.017197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.017323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.017373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 
00:35:55.827 [2024-07-26 06:28:07.017573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.017627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.017801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.017835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.018043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.018088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.018289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.018348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.018540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.018580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 
00:35:55.827 [2024-07-26 06:28:07.018734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.018768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.018953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.018989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.019155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.019188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.019367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.019403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.019616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.019672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 
00:35:55.827 [2024-07-26 06:28:07.019837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.019869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.020029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.020067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.020228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.020264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.020438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.020470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.020653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.020689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 
00:35:55.827 [2024-07-26 06:28:07.020932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.020999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.021191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.021223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.021381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.021418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.021647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.021702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.021858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.021890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 
00:35:55.827 [2024-07-26 06:28:07.022049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.022105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.022244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.022279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.022458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.022490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.022638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.022674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.022865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.022897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 
00:35:55.827 [2024-07-26 06:28:07.023057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.023097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.023253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.023287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.827 [2024-07-26 06:28:07.023455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.827 [2024-07-26 06:28:07.023490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.827 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.023650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.023682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.023868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.023902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 
00:35:55.828 [2024-07-26 06:28:07.024097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.024133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.024298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.024364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.024518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.024552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.024760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.024799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.024982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.025018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 
00:35:55.828 [2024-07-26 06:28:07.025197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.025241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.025376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.025410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.025548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.025600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.025785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.025831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.026027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.026076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 
00:35:55.828 [2024-07-26 06:28:07.026277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.026318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.026461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.026505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.026653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.026687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.026866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.026899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.027050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.027092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 
00:35:55.828 [2024-07-26 06:28:07.027265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.027320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.027491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.027528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.027666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.027701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.027842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.027875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.028075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.028109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 
00:35:55.828 [2024-07-26 06:28:07.028269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.028303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.028479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.028520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.028680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.028713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.028905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.028942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.029078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.029115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 
00:35:55.828 [2024-07-26 06:28:07.029255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.029293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.029460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.828 [2024-07-26 06:28:07.029493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.828 qpair failed and we were unable to recover it. 00:35:55.828 [2024-07-26 06:28:07.029675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.029746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.029931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.029964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.030130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.030166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 
00:35:55.829 [2024-07-26 06:28:07.030328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.030362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.030520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.030553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.030692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.030725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.030895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.030949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.031143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.031177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 
00:35:55.829 [2024-07-26 06:28:07.031357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.031391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.031554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.031588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.031776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.031808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.031970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.032003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.032165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.032212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 
00:35:55.829 [2024-07-26 06:28:07.032384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.032419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.032600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.032644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.032809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.032851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.033011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.033050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 00:35:55.829 [2024-07-26 06:28:07.033225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.829 [2024-07-26 06:28:07.033258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:55.829 qpair failed and we were unable to recover it. 
00:35:55.829 [2024-07-26 06:28:07.033455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.033494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.033652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.033686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.033832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.033865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.034054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.034093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.034224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.034257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.034386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.034422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.034619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.034653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.034836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.034889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.035104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.035138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.035279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.035312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.035499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.035531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.035690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.829 [2024-07-26 06:28:07.035723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.829 qpair failed and we were unable to recover it.
00:35:55.829 [2024-07-26 06:28:07.035875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.035908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.036115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.036149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.036280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.036314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.036506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.036539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.036698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.036731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.036888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.036925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.037093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.037127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.037285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.037318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.037496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.037544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.037740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.037774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.037942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.037975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.038107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.038142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.038299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.038335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.038509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.038552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.038734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.038772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.038970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.039009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.039190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.039233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.039408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.039440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.039573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.039611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.039770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.039803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.040001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.040035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.040199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.040247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.040454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.040490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.040645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.040679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.040860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.040894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.041027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.041082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.041220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.041253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.041408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.041446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.041634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.041668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.041831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.041864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.042046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.042087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.042229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.042267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.042425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.042458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.042616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.042653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.042834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.042872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.043016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.043053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.043242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.830 [2024-07-26 06:28:07.043275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.830 qpair failed and we were unable to recover it.
00:35:55.830 [2024-07-26 06:28:07.043450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.043487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.043624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.043656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.043788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.043826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.043990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.044023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.044202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.044236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.044426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.044459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.044613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.044645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.044780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.044812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.045003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.045040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.045244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.045277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.045405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.045437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.045605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.045646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.045806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.045843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.046005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.046038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.046221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.046256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.046395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.046430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.046590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.046624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.046782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.046815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.046961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.046994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.047177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.047223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.047440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.047475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.047633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.047666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.047803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.047835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.047990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.048024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.048171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.048205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.048364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.048413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.048614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.048648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.048822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.048856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.049040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.049080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.049237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.049270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.049423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.049457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.049639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.049677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.049810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.049849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.050012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.831 [2024-07-26 06:28:07.050045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.831 qpair failed and we were unable to recover it.
00:35:55.831 [2024-07-26 06:28:07.050217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.050251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.050396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.050429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.050595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.050628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.050760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.050793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.050966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.051020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.051224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.051265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.051422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.051467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.051610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.051645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.051779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.051827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.051985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.052018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.052200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.052235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.052393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.832 [2024-07-26 06:28:07.052426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.832 qpair failed and we were unable to recover it.
00:35:55.832 [2024-07-26 06:28:07.052588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.052621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.052757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.052807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.053023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.053056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.053227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.053259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.053417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.053451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 
00:35:55.832 [2024-07-26 06:28:07.053638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.053675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.053808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.053842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.053980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.054014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.054150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.054183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.054319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.054354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 
00:35:55.832 [2024-07-26 06:28:07.054512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.054548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.054716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.054748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.054905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.054938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.055097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.055131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.055272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.055305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 
00:35:55.832 [2024-07-26 06:28:07.055443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.055476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.055679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.055712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.055878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.055912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.056046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.056104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.056272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.056305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 
00:35:55.832 [2024-07-26 06:28:07.056462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.056496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.056633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.056666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.056831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.056864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.057029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.832 [2024-07-26 06:28:07.057069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.832 qpair failed and we were unable to recover it. 00:35:55.832 [2024-07-26 06:28:07.057234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.057267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 
00:35:55.833 [2024-07-26 06:28:07.057401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.057435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.057600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.057633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.057843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.057879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.058087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.058123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.058289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.058322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 
00:35:55.833 [2024-07-26 06:28:07.058501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.058534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.058677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.058710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.058862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.058896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.059088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.059126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.059296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.059333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 
00:35:55.833 [2024-07-26 06:28:07.059517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.059552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.059747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.059780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.059940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.059973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.060150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.060185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.060349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.060382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 
00:35:55.833 [2024-07-26 06:28:07.060572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.060609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.060792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.060827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.060973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.061006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.061167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.061200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.061392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.061425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 
00:35:55.833 [2024-07-26 06:28:07.061589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.061629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.061831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.061867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.062033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.062076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.062224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.062257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.062440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.062472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 
00:35:55.833 [2024-07-26 06:28:07.062631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.062664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.062791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.062829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.063009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.063045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.063248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.063280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.063439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.063471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 
00:35:55.833 [2024-07-26 06:28:07.063605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.063638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.063803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.063840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.063986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.833 [2024-07-26 06:28:07.064022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.833 qpair failed and we were unable to recover it. 00:35:55.833 [2024-07-26 06:28:07.064206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.064239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.064385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.064418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 
00:35:55.834 [2024-07-26 06:28:07.064576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.064611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.064765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.064800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.064929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.064962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.065125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.065159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.065324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.065358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 
00:35:55.834 [2024-07-26 06:28:07.065493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.065526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.065665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.065699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.065855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.065895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.066083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.066117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.066246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.066280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 
00:35:55.834 [2024-07-26 06:28:07.066435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.066469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.066600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.066633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.066792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.066828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.066990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.067025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.067175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.067210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 
00:35:55.834 [2024-07-26 06:28:07.067372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.067405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.067561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.067594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.067778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.067811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.068007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.068040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.068220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.068253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 
00:35:55.834 [2024-07-26 06:28:07.068406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.068439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.068615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.068648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.068778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.068824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.068965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.068998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.069201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.069239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 
00:35:55.834 [2024-07-26 06:28:07.069416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.069461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.069622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.069654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.069836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.069869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.070037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.070077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 00:35:55.834 [2024-07-26 06:28:07.070212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.070244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.834 qpair failed and we were unable to recover it. 
00:35:55.834 [2024-07-26 06:28:07.070398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.834 [2024-07-26 06:28:07.070431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.070559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.070592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.070767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.070800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.070953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.070990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.071171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.071205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 
00:35:55.835 [2024-07-26 06:28:07.071338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.071371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.071532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.071565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.071722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.071755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.071939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.071975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.072150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.072184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 
00:35:55.835 [2024-07-26 06:28:07.072313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.072363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.072515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.072548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.072715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.072749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.072906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.072939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.073106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.073139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 
00:35:55.835 [2024-07-26 06:28:07.073276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.073313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.073452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.073488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.073647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.073683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.073820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.073852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.074009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.074046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 
00:35:55.835 [2024-07-26 06:28:07.074217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.074250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.074407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.074443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.074619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.074652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.074797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.074830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.074958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.074991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 
00:35:55.835 [2024-07-26 06:28:07.075118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.075151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.075341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.075374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.075530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.075563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.075708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.075741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.075893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.075929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 
00:35:55.835 [2024-07-26 06:28:07.076116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.076149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.076300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.076349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.076499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.076533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.076668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.076701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.076886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.076919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 
00:35:55.835 [2024-07-26 06:28:07.077081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.077118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.077278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.077311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.835 [2024-07-26 06:28:07.077481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.835 [2024-07-26 06:28:07.077514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.835 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.077672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.077705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.077860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.077893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.078076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.078109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.078265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.078297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.078434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.078467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.078669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.078706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.078856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.078891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.079033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.079074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.079234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.079267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.079434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.079468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.079628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.079662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.079854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.079891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.080052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.080093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.080218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.080251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.080412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.080445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.080633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.080666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.080839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.080877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.081079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.081113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.081313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.081359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.081489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.081522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.081657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.081701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.081885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.081918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.082065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.082097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.082232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.082266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.082430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.082466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.082654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.082686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.082853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.082885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.083047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.083086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.083275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.083308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.083471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.083504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.083639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.083672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.083810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.083844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.084001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.084034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.084227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.084275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.084473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.084509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.084722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.084788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.084991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.085031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 
00:35:55.836 [2024-07-26 06:28:07.085236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.085274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.085439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.085473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.085659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.836 [2024-07-26 06:28:07.085695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.836 qpair failed and we were unable to recover it. 00:35:55.836 [2024-07-26 06:28:07.085847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.085884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.086081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.086131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 
00:35:55.837 [2024-07-26 06:28:07.086283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.086317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.086484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.086518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.086673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.086706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.086913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.086952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.087140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.087176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 
00:35:55.837 [2024-07-26 06:28:07.087337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.087387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.087618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.087657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.087840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.087891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.088083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.088135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 00:35:55.837 [2024-07-26 06:28:07.088301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.837 [2024-07-26 06:28:07.088334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:55.837 qpair failed and we were unable to recover it. 
00:35:55.837 [2024-07-26 06:28:07.088499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.088551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.088730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.088768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.088953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.088985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.089145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.089179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.089336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.089370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.089568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.089604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.089781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.089818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.090002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.090035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.090229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.090263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.090402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.090435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.090596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.090633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.090841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.090879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.090966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:35:55.837 [2024-07-26 06:28:07.091041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.091086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.091119] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:55.837 [2024-07-26 06:28:07.091260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.091304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.091488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.091523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.091700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.091740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.091886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.091922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.092085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.092138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.092277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.092311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.092517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.092550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.092795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.092831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.093036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.093076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.093259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.093292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.093482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.093518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.093664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.093706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.093882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.093923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.094130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.094163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.094312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.837 [2024-07-26 06:28:07.094352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.837 qpair failed and we were unable to recover it.
00:35:55.837 [2024-07-26 06:28:07.094561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.838 [2024-07-26 06:28:07.094597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.838 qpair failed and we were unable to recover it.
00:35:55.838 [2024-07-26 06:28:07.094813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.838 [2024-07-26 06:28:07.094851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.838 qpair failed and we were unable to recover it.
00:35:55.838 [2024-07-26 06:28:07.095022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.838 [2024-07-26 06:28:07.095071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.838 qpair failed and we were unable to recover it.
00:35:55.838 [2024-07-26 06:28:07.095274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.838 [2024-07-26 06:28:07.095307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.838 qpair failed and we were unable to recover it.
00:35:55.838 [2024-07-26 06:28:07.095500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.838 [2024-07-26 06:28:07.095537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.838 qpair failed and we were unable to recover it.
00:35:55.838 [2024-07-26 06:28:07.095696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.838 [2024-07-26 06:28:07.095746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:55.838 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.095921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.095961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.096124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.096174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.096364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.096397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.096558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.096602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.096760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.096794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.096951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.096988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.097157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.097200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.097363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.097398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.097582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.097616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.097783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.097820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.098035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.098076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.098257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.098294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.098473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.098509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.098662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.098694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.098855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.098905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.099052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.099094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.099250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.099284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.099506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.099543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.099718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.099755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.099905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.099938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.100096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.100129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.100265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.100298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.100459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.100493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.100668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.100706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.100903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.100950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.101163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.101199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.101371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.101408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.101595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.101654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.101833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.101866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.102012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.102048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.102237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.102279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.102466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.102498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.102668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.102705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.102887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.102919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.103102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.108 [2024-07-26 06:28:07.103134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.108 qpair failed and we were unable to recover it.
00:35:56.108 [2024-07-26 06:28:07.103289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.103325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.103460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.103506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.103664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.103697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.103868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.103904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.104074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.104127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.104348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.104383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.104558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.104595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.104852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.104910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.105082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.105115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.105297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.105334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.105613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.105672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.105855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.105889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.106074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.106111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.106298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.106337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.106552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.106584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.106768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.106803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.107014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.107046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.107244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.107276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.107403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.107453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.107668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.107728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.107969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.108002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.108200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.108243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.108400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.108447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.108613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.108649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.108784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.108837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.108984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.109033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.109216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.109264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.109435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.109470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.109638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.109673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.109817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.109852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.110020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.110053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.110229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.110263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.110466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.110498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.110676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.110724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.110903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.110938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.111106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.111147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.111328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.111364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.111603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.111663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.111961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.112019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.112183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.109 [2024-07-26 06:28:07.112218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.109 qpair failed and we were unable to recover it.
00:35:56.109 [2024-07-26 06:28:07.112394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.110 [2024-07-26 06:28:07.112446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.110 qpair failed and we were unable to recover it.
00:35:56.110 [2024-07-26 06:28:07.112654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.110 [2024-07-26 06:28:07.112704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.110 qpair failed and we were unable to recover it.
00:35:56.110 [2024-07-26 06:28:07.112871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.110 [2024-07-26 06:28:07.112904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.110 qpair failed and we were unable to recover it.
00:35:56.110 [2024-07-26 06:28:07.113064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.110 [2024-07-26 06:28:07.113097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.110 qpair failed and we were unable to recover it.
00:35:56.110 [2024-07-26 06:28:07.113270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.110 [2024-07-26 06:28:07.113321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.110 qpair failed and we were unable to recover it.
00:35:56.110 [2024-07-26 06:28:07.113500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.113551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.113764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.113815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.113985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.114032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.114194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.114250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.114423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.114458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 
00:35:56.110 [2024-07-26 06:28:07.114641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.114679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.114850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.114883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.115067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.115115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.115267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.115302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.115518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.115555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 
00:35:56.110 [2024-07-26 06:28:07.115803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.115860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.116043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.116086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.116225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.116258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.116437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.116473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.116618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.116655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 
00:35:56.110 [2024-07-26 06:28:07.116921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.116979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.117185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.117232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.117395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.117461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.117658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.117711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.117863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.117915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 
00:35:56.110 [2024-07-26 06:28:07.118054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.118094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.118298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.118349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.118556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.118614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.118878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.118935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.119097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.119155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 
00:35:56.110 [2024-07-26 06:28:07.119337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.119384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.119516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.119551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.119811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.119867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.120072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.120124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.120308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.120375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 
00:35:56.110 [2024-07-26 06:28:07.120557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.120601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.120802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.120860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.110 [2024-07-26 06:28:07.121040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.110 [2024-07-26 06:28:07.121083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.110 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.121259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.121292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.121475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.121513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.111 [2024-07-26 06:28:07.121768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.121826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.121981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.122014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.122180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.122213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.122386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.122422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.122670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.122706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.111 [2024-07-26 06:28:07.122906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.122942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.123102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.123136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.123268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.123301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.123430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.123481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.123652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.123727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.111 [2024-07-26 06:28:07.123955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.123993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.124181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.124214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.124369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.124436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.124671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.124729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.124917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.124978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.111 [2024-07-26 06:28:07.125149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.125184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.125315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.125347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.125516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.125548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.125735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.125796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.126004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.126041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.111 [2024-07-26 06:28:07.126229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.126262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.126419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.126455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.126708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.126740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.126956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.126992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.127168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.127202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.111 [2024-07-26 06:28:07.127357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.127389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.127534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.127570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.127805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.127859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.128047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.128086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.128225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.128258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.111 [2024-07-26 06:28:07.128511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.128586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.128877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.128919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.129072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.129134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.129302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.129335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 00:35:56.111 [2024-07-26 06:28:07.129475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.111 [2024-07-26 06:28:07.129508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.111 qpair failed and we were unable to recover it. 
00:35:56.112 [2024-07-26 06:28:07.129727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.129771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.129978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.130017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.130189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.130222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.130401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.130451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.130727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.130785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 
00:35:56.112 [2024-07-26 06:28:07.130998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.131034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.131195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.131228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.131493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.131567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.131823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.131880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.132086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.132136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 
00:35:56.112 [2024-07-26 06:28:07.132321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.132371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.132555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.132589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.132741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.132780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.132962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.133000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 00:35:56.112 [2024-07-26 06:28:07.133200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.112 [2024-07-26 06:28:07.133233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.112 qpair failed and we were unable to recover it. 
00:35:56.112 [2024-07-26 06:28:07.133425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.133489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.133775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.133836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.134054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.134115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.134259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.134291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.134491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.134527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.134789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.134825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.135035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.135086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.135262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.135309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.135494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.135530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.135755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.135815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.136022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.136079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.136242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.136277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.136435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.136482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.136647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.136682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.136808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.136841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.112 [2024-07-26 06:28:07.136994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.112 [2024-07-26 06:28:07.137027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.112 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.137190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.137237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.137378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.137413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.137579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.137630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.137812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.137844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.138006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.138039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.138220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.138252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.138446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.138498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.138655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.138690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.138831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.138888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.139091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.139147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.139281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.139314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.139486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.139519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.139767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.139823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.140000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.140032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.140198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.140230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.140443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.140495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.140655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.140690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.140878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.140940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.141148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.141181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.141351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.141385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.141581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.141614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.141780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.141825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.141985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.142017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.142215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.142263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.142478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.142531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.142698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.142734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.142920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.142984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.143181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.143214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.143370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.143402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.143620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.143688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.143864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.143900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.144082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.144116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.144276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.144318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.144561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.144617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.144831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.144864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.145071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.145107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.145296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.145344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.113 qpair failed and we were unable to recover it.
00:35:56.113 [2024-07-26 06:28:07.145563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.113 [2024-07-26 06:28:07.145598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.145738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.145772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.145958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.145992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.146175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.146209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.146392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.146439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.146599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.146634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.146832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.146866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.147049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.147095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.147270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.147302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.147450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.147482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.147620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.147653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.147818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.147851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.148011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.148048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.148220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.148268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.148441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.148477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.148645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.148679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.148801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.148834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.149000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.149052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.149222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.149255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.149420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.149472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.149607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.149643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.149817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.149850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.150054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.150096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.150286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.150333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.150537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.150572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.150787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.150848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.151004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.151041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.151233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.151267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.151426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.151479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.151759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.151815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.152004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.152036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.152203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.152236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.152406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.152458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.152642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.152677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.152835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.152904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.153127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.153161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.153322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.153355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.153501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.114 [2024-07-26 06:28:07.153536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.114 qpair failed and we were unable to recover it.
00:35:56.114 [2024-07-26 06:28:07.153712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.153747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.153928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.153961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.154115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.154148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.154297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.154362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.154523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.154557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.154722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.154755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.154893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.154926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.155057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.155095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.155233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.155264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.155462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.155497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.155679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.155711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.155853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.155889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.156090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.156141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.156297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.156328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.156561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.156618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.156838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.156874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.157072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.157105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.157289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.157322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.157549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.157581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.157713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.157744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.157873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.157923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.158138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.158171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.158341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.158373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.158513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.158545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.158769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.158802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.158959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.158991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.159134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.159167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.159369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.159405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.159591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.159623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.159781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.159813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.160031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.160081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.160263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.160295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.160470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.160506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.160710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.160746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.160919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.160952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.161088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.161121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.161276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.161308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.115 [2024-07-26 06:28:07.161471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.115 [2024-07-26 06:28:07.161503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.115 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.161661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.161697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.161880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.161912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.162106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.162138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.162268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.162305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.162492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.162528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.162689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.162723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.162902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.162938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.163123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.163156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.163286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.163328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.163508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.163544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.163707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.163743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.163928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.163960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.164100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.164132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.164336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.164372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.164575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.164607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.164743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.164792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.164993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.165028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.165248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.165281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.165466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.165501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.165771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.165825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.166038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.166078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.166214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.166246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.166427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.166463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.166671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.166703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.166860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.166896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.167129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.167165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.167329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.167362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.167494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.167526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.167689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.167721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.167937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.167969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.168172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.168205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.168410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.168445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.168628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.168660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.168843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.168878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.169048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.169096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.169312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.169344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.169516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.116 [2024-07-26 06:28:07.169551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.116 qpair failed and we were unable to recover it.
00:35:56.116 [2024-07-26 06:28:07.169724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.169760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.169952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.169983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.170160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.170196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.170366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.170402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.170550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.170582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.170737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.170788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.170992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.171032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.171223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.171255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.171430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.171466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.171608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.171643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.171855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.171887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.172028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.172070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.172214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.172249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.172431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.172463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.172635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.172671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.172831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.172866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.173045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.173117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.173262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.173294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.173453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.173503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.173701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.173733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.173863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.173912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.174099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.174136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.174299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.174331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.174524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.117 [2024-07-26 06:28:07.174561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.117 qpair failed and we were unable to recover it.
00:35:56.117 [2024-07-26 06:28:07.174743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.174786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 00:35:56.117 [2024-07-26 06:28:07.174963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.174995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 00:35:56.117 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.117 [2024-07-26 06:28:07.175198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.175235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 00:35:56.117 [2024-07-26 06:28:07.175378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.175414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 00:35:56.117 [2024-07-26 06:28:07.175563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.175595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 
00:35:56.117 [2024-07-26 06:28:07.175798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.175843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 00:35:56.117 [2024-07-26 06:28:07.176026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.176079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 00:35:56.117 [2024-07-26 06:28:07.176271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.176303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.117 qpair failed and we were unable to recover it. 00:35:56.117 [2024-07-26 06:28:07.176479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.117 [2024-07-26 06:28:07.176515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.176688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.176723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.176900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.176932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.177135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.177183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.177330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.177366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.177578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.177610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.177788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.177824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.177998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.178034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.178199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.178231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.178411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.178446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.178597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.178633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.178810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.178842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.178989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.179024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.179188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.179220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.179361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.179394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.179527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.179559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.179741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.179773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.179971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.180003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.180134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.180167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.180383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.180420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.180604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.180636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.180841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.180878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.181024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.181069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.181258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.181290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.181411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.181443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.181604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.181637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.181765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.181803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.181932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.181969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.182186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.182223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.182398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.182430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.182571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.182604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.182732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.182765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.182944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.182976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.183151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.183184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.183340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.183401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.183569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.183606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.183775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.183811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 
00:35:56.118 [2024-07-26 06:28:07.183979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.118 [2024-07-26 06:28:07.184015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.118 qpair failed and we were unable to recover it. 00:35:56.118 [2024-07-26 06:28:07.184238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.184285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.184514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.184553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.184727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.184761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.184934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.184968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 
00:35:56.119 [2024-07-26 06:28:07.185130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.185164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.185304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.185337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.185486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.185521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.185678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.185711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.185869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.185902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 
00:35:56.119 [2024-07-26 06:28:07.186107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.186155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.186321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.186356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.186521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.186569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.186748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.186785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.186933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.186967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 
00:35:56.119 [2024-07-26 06:28:07.187113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.187147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.187307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.187341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.187501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.187549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.187695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.187729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.187886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.187920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 
00:35:56.119 [2024-07-26 06:28:07.188054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.188096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.188241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.188276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.188443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.188475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.188598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.188631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.188760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.188794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 
00:35:56.119 [2024-07-26 06:28:07.188953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.188986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.189158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.189191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.189321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.189354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.189532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.189565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.189707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.189742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 
00:35:56.119 [2024-07-26 06:28:07.189916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.189956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.190111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.190145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.190272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.190305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.190454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.190487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.190691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.190724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 
00:35:56.119 [2024-07-26 06:28:07.190884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.190917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.191132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.191166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.191324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.191366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.119 qpair failed and we were unable to recover it. 00:35:56.119 [2024-07-26 06:28:07.191517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.119 [2024-07-26 06:28:07.191551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.120 qpair failed and we were unable to recover it. 00:35:56.120 [2024-07-26 06:28:07.191710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.120 [2024-07-26 06:28:07.191743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.120 qpair failed and we were unable to recover it. 
00:35:56.120 [2024-07-26 06:28:07.191927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.120 [2024-07-26 06:28:07.191961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.120 qpair failed and we were unable to recover it.
00:35:56.120 [... the three-line error record above (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock connection error / qpair failed and we were unable to recover it) repeats continuously from 06:28:07.191 through 06:28:07.215 for tqpairs 0x61500021ff00, 0x6150001ffe80, 0x6150001f2780, and 0x615000210000, all attempting addr=10.0.0.2, port=4420 ...]
00:35:56.123 [2024-07-26 06:28:07.215879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.215916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.216057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.216099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.216851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.216888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.217123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.217157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.217319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.217360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.217519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.217552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.217694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.217728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.217890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.217931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.218104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.218138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.218284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.218318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.218494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.218527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.218693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.218725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.218892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.218925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.219066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.219100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.219234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.219267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.219438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.219473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.219637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.219670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.219808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.219841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.219981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.220015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.220187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.220231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.220427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.220474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.220640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.220674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.220812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.220852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.221079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.221113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.221264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.221310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.221499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.221542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.221712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.221745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.221885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.221918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.222078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.222114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.222250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.222283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.222426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.222459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.222654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.222687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.222827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.222861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.223018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.223051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.223231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.223265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.223435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.223468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.223663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.223697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.223877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.223910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.224071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.224105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 00:35:56.123 [2024-07-26 06:28:07.224260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.123 [2024-07-26 06:28:07.224293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.123 qpair failed and we were unable to recover it. 
00:35:56.123 [2024-07-26 06:28:07.224454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.224487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.224660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.224694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.224858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.224890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.225033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.225088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.225256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.225289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.225421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.225454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.225624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.225658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.225850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.225883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.226024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.226071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.226213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.226247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.226462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.226510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.226678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.226715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.226866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.226901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.227093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.227128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.227273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.227307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.227560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.227616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.227774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.227809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.227972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.228006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.228149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.228183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.228320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.228354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.228520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.228552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.228748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.228781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.228961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.229015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.229200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.229238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.229402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.229436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.229622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.229655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.229814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.229847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.229987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.230021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.230187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.230222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.230363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.230411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.230586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.230624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.230777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.230810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.230967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.231000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.231144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.231178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.231326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.231361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.231490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.231534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.231714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.231756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.231951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.231984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.232153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.232187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 00:35:56.124 [2024-07-26 06:28:07.232314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.232347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.124 qpair failed and we were unable to recover it. 
00:35:56.124 [2024-07-26 06:28:07.232501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.124 [2024-07-26 06:28:07.232541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.232717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.232764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.232929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.232965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.233107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.233140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.233308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.233340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.125 [2024-07-26 06:28:07.233484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.233516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.233704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.233737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.233898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.233930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.234088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.234122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.234263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.234296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.125 [2024-07-26 06:28:07.234477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.234510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.234695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.234727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.234868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.234900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.235025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.235074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.235238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.235270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.125 [2024-07-26 06:28:07.235416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.235449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.235609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.235642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.235804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.235837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.235968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.236000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.236163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.236197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.125 [2024-07-26 06:28:07.236378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.236437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.236693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.236738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.236905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.236943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.237120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.237156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.237294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.237328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.125 [2024-07-26 06:28:07.237524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.237557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.237720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.237756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.237894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.237936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.238126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.238161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.238294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.238365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.125 [2024-07-26 06:28:07.238501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.238534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.238685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.238718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.238880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.238912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.239056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.239096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.239261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.239293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.125 [2024-07-26 06:28:07.239436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.239469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.239660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.239692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.239848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.239880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.240031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.240096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 00:35:56.125 [2024-07-26 06:28:07.240276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.125 [2024-07-26 06:28:07.240312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.125 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.240449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.240482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.240647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.240682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.240848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.240881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.241057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.241100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.241240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.241274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.241454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.241487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.241652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.241693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.241865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.241899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.242038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.242090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.242240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.242275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.242509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.242543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.242713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.242746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.242912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.242956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.243173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.243211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.243346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.243380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.243568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.243602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.243740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.243788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.243953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.243985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.244175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.244209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.244360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.244394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.244538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.244570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.244702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.244736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.244884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.244922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.245093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.245126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.245287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.245321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.245514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.245557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.245697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.245729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.245890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.245922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.246078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.246113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.246238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:56.126 [2024-07-26 06:28:07.246257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.246290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.246532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.246565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.246735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.246767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.246928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.246960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.247142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.247176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.247338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.247376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.247553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.247590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.247752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.247799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.247986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.248021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.248166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.248199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.126 [2024-07-26 06:28:07.248366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.248399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 
00:35:56.126 [2024-07-26 06:28:07.248571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.126 [2024-07-26 06:28:07.248612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.126 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.248777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.248809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.248934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.248967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.249132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.249165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.249290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.249322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.249498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.249530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.249688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.249720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.249866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.249899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.250069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.250105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.250296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.250330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.250477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.250510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.250679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.250718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.250875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.250915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.251080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.251132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.251268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.251302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.251477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.251511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.251706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.251742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.251959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.251992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.252194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.252228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.252375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.252419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.252666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.252699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.252884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.252916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.253067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.253101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.253276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.253310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.253565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.253596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.253836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.253871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.254077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.254111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.254273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.254307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.254473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.254517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.254677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.254710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.254875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.254909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.255046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.255091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.255338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.255382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.255601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.255646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.255812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.255845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.256030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.256087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.256222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.256256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.256429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.256462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.256623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.256657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.256877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.256910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 
00:35:56.127 [2024-07-26 06:28:07.257105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.257155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.257326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.257362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.257528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.257563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.257706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.127 [2024-07-26 06:28:07.257739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.127 qpair failed and we were unable to recover it. 00:35:56.127 [2024-07-26 06:28:07.257912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.257944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.258143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.258176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.258329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.258361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.258525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.258572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.258765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.258798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.258981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.259013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.259169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.259202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.259341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.259374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.259511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.259543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.259711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.259744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.259908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.259948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.260098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.260132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.260259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.260291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.260456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.260489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.260708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.260740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.260900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.260934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.261081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.261113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.261270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.261302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.261449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.261482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.261640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.261672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.261833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.261882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.262067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.262104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.262246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.262280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.262431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.262465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.262663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.262697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.262872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.262906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.263085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.263119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.263263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.263296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.263528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.263568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.263742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.263776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.264023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.264057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.264245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.264283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.264465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.264510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.264648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.264681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.264864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.264897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.265038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.265087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.265219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.265253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.265388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.265422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.265564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.265597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.265778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.265812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.265949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.265983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.266174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.266207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.266335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.266370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.266512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.266544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.266735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.266768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 00:35:56.128 [2024-07-26 06:28:07.266934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.128 [2024-07-26 06:28:07.266965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.128 qpair failed and we were unable to recover it. 
00:35:56.128 [2024-07-26 06:28:07.267151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.267184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.267318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.267351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.267489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.267522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.267663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.267696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.267843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.267875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 
00:35:56.129 [2024-07-26 06:28:07.268037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.268077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.268218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.268250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.268401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.268445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.268600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.268632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.268788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.268820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 
00:35:56.129 [2024-07-26 06:28:07.268951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.268984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.269133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.269166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.269329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.269368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.269509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.269542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.269698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.269730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 
00:35:56.129 [2024-07-26 06:28:07.269918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.269960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.270122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.270155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.270283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.270315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.270515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.270548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.270676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.270708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 
00:35:56.129 [2024-07-26 06:28:07.270863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.270895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.271085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.271127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.271276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.271308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.271442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.271474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 00:35:56.129 [2024-07-26 06:28:07.271629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.129 [2024-07-26 06:28:07.271662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.129 qpair failed and we were unable to recover it. 
00:35:56.129 [2024-07-26 06:28:07.271822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.271870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.272037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.272083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.272209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.272241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.272432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.272464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.272599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.272630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.272766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.272798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.272987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.273026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.273196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.273229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.273364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.273396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.273588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.273630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.273784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.273816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.273984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.274016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.274169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.274201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.274368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.274400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.274577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.274609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.274807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.274839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.129 [2024-07-26 06:28:07.274975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.129 [2024-07-26 06:28:07.275024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.129 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.275181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.275214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.275377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.275419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.275552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.275584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.275722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.275754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.275887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.275920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.276106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.276140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.276268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.276301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.276479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.276520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.276679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.276711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.276871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.276902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.277057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.277133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.277282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.277318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.277487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.277522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.277658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.277700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.277866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.277899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.278082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.278128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.278272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.278314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.278520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.278553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.278744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.278782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.278951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.278984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.279164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.279206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.279355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.279388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.279579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.279612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.279768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.279818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.279954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.279994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.280155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.280197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.280343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.280386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.280569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.280613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.280788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.280828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.280966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.280999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.281167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.281201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.281350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.281386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.281552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.281589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.281778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.281812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.281978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.282012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.282175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.282212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.282372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.282404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.282592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.282628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.282790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.282824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.283012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.283046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.283247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.283281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.283434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.283468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.283652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.283685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.283853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.283887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.284084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.284119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.284281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.284315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.130 qpair failed and we were unable to recover it.
00:35:56.130 [2024-07-26 06:28:07.284481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.130 [2024-07-26 06:28:07.284515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.284678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.284712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.284859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.284893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.285069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.285111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.285279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.285312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.285450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.285483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.285643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.285675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.285806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.285849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.286012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.286044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.286200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.286233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.286419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.286451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.286608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.286642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.286802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.286835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.287014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.287047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.287207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.287239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.287384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.287427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.287593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.287625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.287766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.287803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.287953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.287988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.288157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.288202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.288335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.288367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.288562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.288594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.288754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.288787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.288911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.288943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.289126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.289159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.289318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.289350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.289516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.289548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.289708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.289740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.289904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.289936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.290109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.290142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.290272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.290305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.290484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.131 [2024-07-26 06:28:07.290517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.131 qpair failed and we were unable to recover it.
00:35:56.131 [2024-07-26 06:28:07.290653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.290687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.290868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.290902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.291055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.291097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.291257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.291289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.291446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.291478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 
00:35:56.131 [2024-07-26 06:28:07.291602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.291640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.291790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.291822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.291975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.292008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.292177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.292210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.292371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.292413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 
00:35:56.131 [2024-07-26 06:28:07.292558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.292591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.292757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.292800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.292941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.292974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.293154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.131 [2024-07-26 06:28:07.293187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.131 qpair failed and we were unable to recover it. 00:35:56.131 [2024-07-26 06:28:07.293326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.293358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.293511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.293544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.293714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.293746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.293893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.293925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.294105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.294138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.294301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.294334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.294493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.294526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.294692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.294725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.294873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.294916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.295081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.295114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.295294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.295326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.295485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.295522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.295665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.295698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.295836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.295869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.296070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.296116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.296273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.296306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.296510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.296551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.296715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.296748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.296875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.296908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.297040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.297084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.297250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.297284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.297426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.297459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.297632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.297664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.297824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.297857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.297996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.298029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.298199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.298232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.298415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.298451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.298635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.298667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.298818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.298851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.298990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.299024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.299169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.299203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.299360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.299392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.299523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.299556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.299691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.299723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.299915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.299948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.300144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.300178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.300320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.300364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.300521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.300564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.300699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.300733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.300906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.300938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.301106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.301139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.301278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.301311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.301504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.301541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.301671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.301703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.301893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.301925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.302121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.302154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 
00:35:56.132 [2024-07-26 06:28:07.302294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.302325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.302508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.132 [2024-07-26 06:28:07.302540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.132 qpair failed and we were unable to recover it. 00:35:56.132 [2024-07-26 06:28:07.302721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.302753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.302902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.302934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.303067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.303109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 
00:35:56.133 [2024-07-26 06:28:07.303294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.303332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.303518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.303551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.303721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.303754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.303923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.303956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.304116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.304149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 
00:35:56.133 [2024-07-26 06:28:07.304313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.304357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.304521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.304554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.304712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.304755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.304916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.304948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.305103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.305147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 
00:35:56.133 [2024-07-26 06:28:07.305284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.305316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.305479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.305511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.305649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.305682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.305838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.305870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.306009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.306042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 
00:35:56.133 [2024-07-26 06:28:07.306203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.306236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.306362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.306394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.306552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.306584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.306742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.306776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.306908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.306940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 
00:35:56.133 [2024-07-26 06:28:07.307098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.307141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.307302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.307334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.307500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.307532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.307667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.307700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 00:35:56.133 [2024-07-26 06:28:07.307828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.133 [2024-07-26 06:28:07.307861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.133 qpair failed and we were unable to recover it. 
00:35:56.135 [2024-07-26 06:28:07.328496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.135 [2024-07-26 06:28:07.328528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.135 qpair failed and we were unable to recover it. 00:35:56.135 [2024-07-26 06:28:07.328687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.135 [2024-07-26 06:28:07.328720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.135 qpair failed and we were unable to recover it. 00:35:56.135 [2024-07-26 06:28:07.328856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.135 [2024-07-26 06:28:07.328889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.135 qpair failed and we were unable to recover it. 00:35:56.135 [2024-07-26 06:28:07.329050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.135 [2024-07-26 06:28:07.329087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.135 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.329233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.329266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.329437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.329470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.329601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.329633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.329795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.329826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.329983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.330016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.330156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.330189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.330333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.330366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.330528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.330562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.330735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.330772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.330940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.330974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.331137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.331170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.331313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.331355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.331526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.331558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.331691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.331723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.331911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.331944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.332102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.332140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.332306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.332340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.332495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.332528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.332662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.332694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.332855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.332887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.333045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.333083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.333221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.333253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.333422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.333454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.333582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.333614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.333797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.333830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.333987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.334020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.334168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.334201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.334333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.334366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.334531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.334563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.334699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.334733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.334905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.334938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.335122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.335155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.335311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.335344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.335497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.335530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.335673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.335705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.335871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.335904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.336032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.336070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.336216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.336248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.336417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.336450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.336624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.336667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.336827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.336861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.337025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.337063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.337193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.337225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.337463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.337496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.337673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.337706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.337861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.337894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-07-26 06:28:07.338077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.338114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.338276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.338308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.338436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.338475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-07-26 06:28:07.338621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.136 [2024-07-26 06:28:07.338653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.338839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.338871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-07-26 06:28:07.339024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.339056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.339222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.339255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.339492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.339524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.339686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.339719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.339853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.339887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-07-26 06:28:07.340018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.340051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.340252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.340285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.340521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.340554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.340747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.340779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.340946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.340978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-07-26 06:28:07.341213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.341246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.341427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.341460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.341616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.341648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.341807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.341839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.341981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.342013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-07-26 06:28:07.342156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.342189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.342342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.342376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.342559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.342592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.342727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.342759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-07-26 06:28:07.342913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.137 [2024-07-26 06:28:07.342945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-07-26 06:28:07.343067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.343100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.343229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.343261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.343400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.343432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.343670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.343702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.343863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.343895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.344078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.344118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.344240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.344272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.344445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.344477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.344607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.344640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.344772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.344805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.344939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.344972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.345109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.345142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.345307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.345340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.345470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.345503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.345668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.345700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.345858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.345890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.346032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.346070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.346202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.346239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.346375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.346407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.346596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.346629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.346785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.346818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.346978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.137 [2024-07-26 06:28:07.347011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.137 qpair failed and we were unable to recover it.
00:35:56.137 [2024-07-26 06:28:07.347180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.347214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.347345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.347384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.347620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.347652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.347808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.347842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.348004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.348036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.348181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.348215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.348353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.348385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.348521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.348553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.348693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.348725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.348903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.348936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.349099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.349142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.349301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.349333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.349489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.349522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.349757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.349789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.349946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.349979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.350156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.350189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.350341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.350374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.350515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.350548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.350731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.350763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.350920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.350953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.351124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.351157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.351307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.351339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.351480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.351513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.351641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.351678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.351917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.351950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.352077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.352110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.352277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.352309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.352495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.352528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.352683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.352716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.352879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.352911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.353050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.353092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.353263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.353297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.353455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.353488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.353626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.353660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.353789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.353822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.354057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.354108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.354274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.354306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.354439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.354472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.354628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.354661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.354821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.354853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.354989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.355021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.355202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.355236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.355371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.355404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.355559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.355592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.355748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.355781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.355908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.355942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.356088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.356121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.356278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.356310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.356468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.356501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.356659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.356692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.356836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.356869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.357028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.138 [2024-07-26 06:28:07.357066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.138 qpair failed and we were unable to recover it.
00:35:56.138 [2024-07-26 06:28:07.357240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.357273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.357402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.357435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.357605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.357638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.357796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.357829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.357985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.358018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.358156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.358189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.358349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.358381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.358565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.358598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.358731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.358763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.358947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.358980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.359144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.359177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.359334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.359366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.359521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.359553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.359739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.359772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.359925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.359958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.360086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.360119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.360275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.360308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.360481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.360514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.360638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.360671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.360832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.360865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.361051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.361088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.361225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.361258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.361423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.361465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.361652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.361689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.361875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.361908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.362078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.362111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.362269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.362301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.362456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.362489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.362639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.362672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.362800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.362832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.363017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.363050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.363226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.363259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.363412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.363445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.363614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.363646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.363805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.363837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.363970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.364002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.364169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.364202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.364392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.364424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.364608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.364641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.364771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.364803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.364965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.139 [2024-07-26 06:28:07.364998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.139 qpair failed and we were unable to recover it.
00:35:56.139 [2024-07-26 06:28:07.365170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.365204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.365346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.365379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.365506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.365539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.365699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.365732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.365858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.365891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 
00:35:56.139 [2024-07-26 06:28:07.366053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.366092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.366275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.366308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.366491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.366523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.366679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.366712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.366872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.366904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 
00:35:56.139 [2024-07-26 06:28:07.367093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.367128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.367282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.367314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.367473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.367506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.139 qpair failed and we were unable to recover it. 00:35:56.139 [2024-07-26 06:28:07.367662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.139 [2024-07-26 06:28:07.367695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.367850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.367882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.368070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.368103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.368265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.368298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.368454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.368487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.368639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.368671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.368829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.368861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.369021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.369054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.369212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.369246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.369377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.369413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.369539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.369572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.369732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.369765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.369928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.369960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.370121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.370154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.370286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.370318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.370505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.370538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.370674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.370706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.370865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.370897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.371024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.371057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.371238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.371270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.371475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.371507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.371673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.371706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.371838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.371870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.372038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.372075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.372217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.372250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.372410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.372443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.372606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.372639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.372774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.372807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.372933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.372965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.373092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.373125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.373259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.373292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.373444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.373478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.373642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.373684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.373811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.373844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.374031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.374069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.374214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.374246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.374434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.374467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.374640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.374673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.374801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.374833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.374970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.375002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.375183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.375215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.375374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.375406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.375642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.375675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.375858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.375890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.376029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.376074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.376259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.376292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.376484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.376517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 
00:35:56.140 [2024-07-26 06:28:07.376644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.376677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.376859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.376892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.377048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.140 [2024-07-26 06:28:07.377092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.140 qpair failed and we were unable to recover it. 00:35:56.140 [2024-07-26 06:28:07.377252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.377285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.377440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.377472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 
00:35:56.141 [2024-07-26 06:28:07.377646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.377678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.377874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.377906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.378106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.378139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.378279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.378311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.378448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.378480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 
00:35:56.141 [2024-07-26 06:28:07.378667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.378700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.378864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.378897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.379087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.379130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.379290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.379322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 00:35:56.141 [2024-07-26 06:28:07.379471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.141 [2024-07-26 06:28:07.379503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.141 qpair failed and we were unable to recover it. 
00:35:56.141 [2024-07-26 06:28:07.379660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.141 [2024-07-26 06:28:07.379693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.141 qpair failed and we were unable to recover it.
00:35:56.143 [... the same three-line sequence repeats through 2024-07-26 06:28:07.401447: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x6150001ffe80 fails with errno = 111, and each time the qpair fails and cannot be recovered ...]
00:35:56.143 [2024-07-26 06:28:07.401575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.401607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.401771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.401803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.401931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.401964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.402131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.402164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.402324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.402360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 
00:35:56.143 [2024-07-26 06:28:07.402519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.402552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.402687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.402720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.402871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.402903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.403034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.403072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.403243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.403274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 
00:35:56.143 [2024-07-26 06:28:07.403458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.403490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.403646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.403678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.403809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.403842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.404001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.404034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.404189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.404223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 
00:35:56.143 [2024-07-26 06:28:07.404380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.404413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.404570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.404602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.404756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.404789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.404912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.404945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.405129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.405166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 
00:35:56.143 [2024-07-26 06:28:07.405310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.405342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.405493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.405525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.405651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.405684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.405845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.405879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.406039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.406076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 
00:35:56.143 [2024-07-26 06:28:07.406246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.406279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.406445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.406478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.406607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.406639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.406825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.406857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.406994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.407027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 
00:35:56.143 [2024-07-26 06:28:07.407219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.407252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.407418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.407451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.407605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.407637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.407778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.407810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.407994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.408027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 
00:35:56.143 [2024-07-26 06:28:07.408166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.143 [2024-07-26 06:28:07.408199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.143 qpair failed and we were unable to recover it. 00:35:56.143 [2024-07-26 06:28:07.408358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.408390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.408527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.408559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.408715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.408747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.408891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.408923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.409110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.409143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.409326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.409358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.409526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.409558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.409742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.409775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.409924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.409956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.410115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.410147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.410282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.410315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.410500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.410542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.410726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.410758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.410898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.410929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.411088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.411120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.411283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.411315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.411502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.411534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.411719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.411751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.411916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.411948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.412084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.412117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.412301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.412333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.412502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.412534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.412689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.412721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.412853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.412890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.413080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.413121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.413249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.413281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.413457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.413489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.413662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.413694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.413859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.413891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.414053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.414090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.414224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.414256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.414398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.414431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.414589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.414622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.414754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.414786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.414975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.415007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.415184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.415217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.415377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.415409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.415568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.415600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.415759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.415792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.144 [2024-07-26 06:28:07.415930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.415962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.416082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.416116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.416269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.416301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.416464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.416497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 00:35:56.144 [2024-07-26 06:28:07.416657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.144 [2024-07-26 06:28:07.416689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.144 qpair failed and we were unable to recover it. 
00:35:56.429 [2024-07-26 06:28:07.437287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.437319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.437481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.437514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.437666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.437699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.437839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.437871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.438029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.438067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 
00:35:56.429 [2024-07-26 06:28:07.438257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.438290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.438447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.438480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.438647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.438680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.438806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.438839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 00:35:56.429 [2024-07-26 06:28:07.439003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.429 [2024-07-26 06:28:07.439036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.429 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.439204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.439236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.439402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.439435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.439595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.439628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.439781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.439815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.439942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.439975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.440162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.440195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.440387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.440420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.440574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.440607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.440803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.440835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.440994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.441027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.441199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.441231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.441386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.441419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.441613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.441645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.441810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.441842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.441973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.442005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.442198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.442231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.442376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.442408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.442606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.442639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.442797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.442830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.442959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.442996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.443128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.443161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.443296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.443328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.443485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.443518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.443676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.443709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.443865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.443898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.444091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.444124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.444266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.444299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.444463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.444495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.444648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.444681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.444833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.444866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.444995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.445028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.445168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.445201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.445360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.445393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.445580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.445612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.430 [2024-07-26 06:28:07.445797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.445830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 
00:35:56.430 [2024-07-26 06:28:07.445988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.430 [2024-07-26 06:28:07.446020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.430 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.446166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.446198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.446331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.446364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.446582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.446615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.446756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.446798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 
00:35:56.431 [2024-07-26 06:28:07.446930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.446963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.447117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.447150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.447308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.447341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.447526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.447558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.447711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.447743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 
00:35:56.431 [2024-07-26 06:28:07.447904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.447936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.448101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.448133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.448292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.448325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.448482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.448514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.448640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.448672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 
00:35:56.431 [2024-07-26 06:28:07.448837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.448869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.449028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.449074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.449203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.449236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.449393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.449425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.449552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.449585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 
00:35:56.431 [2024-07-26 06:28:07.449720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.449753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.449905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.449937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.450072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.450105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.450293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.450328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.450494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.450531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 
00:35:56.431 [2024-07-26 06:28:07.450715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.450747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.450887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.450919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.451091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.451124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.451308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.451341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 00:35:56.431 [2024-07-26 06:28:07.451494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.431 [2024-07-26 06:28:07.451526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.431 qpair failed and we were unable to recover it. 
00:35:56.431 [2024-07-26 06:28:07.451707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.431 [2024-07-26 06:28:07.451739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.431 qpair failed and we were unable to recover it.
00:35:56.435 [2024-07-26 06:28:07.473545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.473577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.473728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.473760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.473894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.473927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.474066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.474099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.474223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.474256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 
00:35:56.435 [2024-07-26 06:28:07.474440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.474472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.474656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.474688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.474842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.474875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.475071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.475104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.475240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.475272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 
00:35:56.435 [2024-07-26 06:28:07.475472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.475504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.475627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.475659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.475815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.475847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.475979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.476012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.476166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.476198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 
00:35:56.435 [2024-07-26 06:28:07.476358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.476391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.476519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.476552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.476713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.476745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.476904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.476936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.477083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.477117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 
00:35:56.435 [2024-07-26 06:28:07.477250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.477283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.477442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.477474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.477632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.477664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.477824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.477856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.435 qpair failed and we were unable to recover it. 00:35:56.435 [2024-07-26 06:28:07.477990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.435 [2024-07-26 06:28:07.478022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 
00:35:56.436 [2024-07-26 06:28:07.478187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.478220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.478357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.478390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.478548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.478581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.478752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.478785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.478917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.478950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 
00:35:56.436 [2024-07-26 06:28:07.479140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.479173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.479303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.479335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.479491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.479524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.479679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.479712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.479876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.479908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 
00:35:56.436 [2024-07-26 06:28:07.480046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.480085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.480245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.480278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.480432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.480465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.480590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.480623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.480809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.480846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 
00:35:56.436 [2024-07-26 06:28:07.480999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.481032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.481176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.481209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.481368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.481400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.481557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.481589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.481743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.481776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 
00:35:56.436 [2024-07-26 06:28:07.481910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.481942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.482100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.482133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.482297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.482329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.482476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.482508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.482642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.482674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 
00:35:56.436 [2024-07-26 06:28:07.482814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.482847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.483004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.483047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.483243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.483275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.483441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.483475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.483630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.483662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 
00:35:56.436 [2024-07-26 06:28:07.483821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.483854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.484018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.484050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.484191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.484223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.436 [2024-07-26 06:28:07.484373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.436 [2024-07-26 06:28:07.484405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.436 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.484586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.484618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 
00:35:56.437 [2024-07-26 06:28:07.484745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.484777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.484908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.484941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.485100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.485138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.485291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.485322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.485485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.485517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 
00:35:56.437 [2024-07-26 06:28:07.485674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.485706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.485847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.485880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.486016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.486048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.486222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.486254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.486411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.486443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 
00:35:56.437 [2024-07-26 06:28:07.486605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.486638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.486779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.486812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.486966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.486999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.487134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.487166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.487352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.487384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 
00:35:56.437 [2024-07-26 06:28:07.487543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.487577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.487741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.487773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.487918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.487950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.488087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.488128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.488289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.488326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 
00:35:56.437 [2024-07-26 06:28:07.488468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.488500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.488626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.488658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.488783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.488815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.488977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.489009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.489211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.489244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 
00:35:56.437 [2024-07-26 06:28:07.489397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.489430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.489593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.489625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.489754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.489787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.489961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.489993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.490152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.490184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 
00:35:56.437 [2024-07-26 06:28:07.490354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.490386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.490515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.490547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.490710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.490743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.437 qpair failed and we were unable to recover it. 00:35:56.437 [2024-07-26 06:28:07.490931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.437 [2024-07-26 06:28:07.490963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.491140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.491172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 
00:35:56.438 [2024-07-26 06:28:07.491333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.491376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.491505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.491537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.491693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.491726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.491882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.491914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.492073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.492106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 
00:35:56.438 [2024-07-26 06:28:07.492273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.492305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.492467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.492499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.492641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.492673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.492859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.492891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.493017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.493049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 
00:35:56.438 [2024-07-26 06:28:07.493219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.493252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.493408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.493440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.493627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.493659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.493801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.493833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.493952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.493984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 
00:35:56.438 [2024-07-26 06:28:07.494116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.494149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.494311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.494343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.494493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.494526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.494686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.494719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.494857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.494889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 
00:35:56.438 [2024-07-26 06:28:07.495023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.495072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.495267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.495299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.495471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.495503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.495655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.495688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.495845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.495877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 
00:35:56.438 [2024-07-26 06:28:07.496035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.496075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.496247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.496279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.496470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.438 [2024-07-26 06:28:07.496502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.438 qpair failed and we were unable to recover it. 00:35:56.438 [2024-07-26 06:28:07.496626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.496660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.496815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.496848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 
00:35:56.439 [2024-07-26 06:28:07.497005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.497039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.497211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.497249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.497412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.497444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.497604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.497636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.497821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.497854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 
00:35:56.439 [2024-07-26 06:28:07.497982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.498014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.498190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.498223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.498351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.498388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.498564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.498597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.498784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.498816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.498925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:56.439 [2024-07-26 06:28:07.498973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:56.439 [2024-07-26 06:28:07.498984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.498999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:56.439 [2024-07-26 06:28:07.499017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 [2024-07-26 06:28:07.499019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.499041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:56.439 [2024-07-26 06:28:07.499216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.499248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 
00:35:56.439 [2024-07-26 06:28:07.499290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:35:56.439 [2024-07-26 06:28:07.499327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:35:56.439 [2024-07-26 06:28:07.499351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:35:56.439 [2024-07-26 06:28:07.499393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.499362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:35:56.439 [2024-07-26 06:28:07.499426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.499579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.499612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.499770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.499802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.499937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.499970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 
00:35:56.439 [2024-07-26 06:28:07.500132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.500165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.500298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.500331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.500490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.500522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.500664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.500696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.500859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.500891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 
00:35:56.439 [2024-07-26 06:28:07.501090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.501132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.501262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.501295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.501447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.501480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.501622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.501655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.501797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.501830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 
00:35:56.439 [2024-07-26 06:28:07.501965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.501997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.502131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.502163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.502324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.502365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.502529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.502561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.502684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.502717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 
00:35:56.439 [2024-07-26 06:28:07.502861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.439 [2024-07-26 06:28:07.502898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.439 qpair failed and we were unable to recover it. 00:35:56.439 [2024-07-26 06:28:07.503033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.503071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 00:35:56.440 [2024-07-26 06:28:07.503216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.503248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 00:35:56.440 [2024-07-26 06:28:07.503457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.503490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 00:35:56.440 [2024-07-26 06:28:07.503632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.503665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 
00:35:56.440 [2024-07-26 06:28:07.503814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.503847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 00:35:56.440 [2024-07-26 06:28:07.504009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.504041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 00:35:56.440 [2024-07-26 06:28:07.504209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.504241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 00:35:56.440 [2024-07-26 06:28:07.504412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.504450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 00:35:56.440 [2024-07-26 06:28:07.504584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.440 [2024-07-26 06:28:07.504616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.440 qpair failed and we were unable to recover it. 
00:35:56.440 [2024-07-26 06:28:07.504781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.440 [2024-07-26 06:28:07.504815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.440 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeated for every retry from 06:28:07.504975 through 06:28:07.526103; all attempts targeted tqpair=0x6150001ffe80 at 10.0.0.2, port 4420 ...]
00:35:56.443 [2024-07-26 06:28:07.526272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.526304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.526433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.526465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.526600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.526633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.526772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.526806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.526932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.526964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 
00:35:56.443 [2024-07-26 06:28:07.527152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.527185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.527334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.527367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.527492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.527524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.527711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.527743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.527878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.527910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 
00:35:56.443 [2024-07-26 06:28:07.528070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.528102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.528228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.528261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.528399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.528432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.528598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.528630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.528767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.528799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 
00:35:56.443 [2024-07-26 06:28:07.528963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.528995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.529172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.529206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.529344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.529377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.529515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.443 [2024-07-26 06:28:07.529548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.443 qpair failed and we were unable to recover it. 00:35:56.443 [2024-07-26 06:28:07.529687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.529720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.529851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.529884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.530018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.530050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.530217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.530249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.530386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.530418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.530582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.530615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.530791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.530825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.530974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.531017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.531149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.531182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.531338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.531370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.531529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.531561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.531716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.531748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.531873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.531906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.532068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.532101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.532233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.532265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.532441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.532473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.532603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.532642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.532781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.532814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.532976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.533009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.533147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.533180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.533317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.533351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.533484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.533516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.533647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.533679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.533834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.533867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.534007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.534040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.534217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.534250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.534415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.534448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.534585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.534618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.534803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.534836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.534994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.535026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.535201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.535256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.535460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.535496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.535648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.535683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.535858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.535892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.536028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.536071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.536260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.536294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 
00:35:56.444 [2024-07-26 06:28:07.536448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.536482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.444 [2024-07-26 06:28:07.536646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.444 [2024-07-26 06:28:07.536679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.444 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.536811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.536845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.537007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.537056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.537212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.537249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 
00:35:56.445 [2024-07-26 06:28:07.537395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.537429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.537589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.537623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.537786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.537820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.537943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.537976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.538108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.538141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 
00:35:56.445 [2024-07-26 06:28:07.538277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.538311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.538473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.538505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.538666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.538698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.538832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.538865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.539008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.539040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 
00:35:56.445 [2024-07-26 06:28:07.539201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.539249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.539429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.539465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.539596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.539630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.539776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.539814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 00:35:56.445 [2024-07-26 06:28:07.539956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.445 [2024-07-26 06:28:07.539990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.445 qpair failed and we were unable to recover it. 
00:35:56.445 [2024-07-26 06:28:07.540156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.445 [2024-07-26 06:28:07.540195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.445 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats through 06:28:07.562 for tqpairs 0x615000210000, 0x6150001f2780, 0x61500021ff00, and 0x6150001ffe80, all with addr=10.0.0.2, port=4420 ...]
00:35:56.448 [2024-07-26 06:28:07.562245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.448 [2024-07-26 06:28:07.562278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.448 qpair failed and we were unable to recover it. 00:35:56.448 [2024-07-26 06:28:07.562437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.448 [2024-07-26 06:28:07.562484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.448 qpair failed and we were unable to recover it. 00:35:56.448 [2024-07-26 06:28:07.562621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.448 [2024-07-26 06:28:07.562656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.448 qpair failed and we were unable to recover it. 00:35:56.448 [2024-07-26 06:28:07.562783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.448 [2024-07-26 06:28:07.562816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.448 qpair failed and we were unable to recover it. 00:35:56.448 [2024-07-26 06:28:07.562952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.448 [2024-07-26 06:28:07.562987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.448 qpair failed and we were unable to recover it. 
00:35:56.448 [2024-07-26 06:28:07.563192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.563228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.563377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.563411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.563571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.563604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.563758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.563791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.563956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.563990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 
00:35:56.449 [2024-07-26 06:28:07.564146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.564179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.564367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.564414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.564555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.564590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.564771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.564807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.564967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.565000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 
00:35:56.449 [2024-07-26 06:28:07.565152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.565185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.565321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.565355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.565490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.565524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.565659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.565692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.565828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.565861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 
00:35:56.449 [2024-07-26 06:28:07.566018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.566051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.566212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.566259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.566405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.566439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.566585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.566622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.566786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.566820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 
00:35:56.449 [2024-07-26 06:28:07.566958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.566991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.567140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.567174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.567328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.567361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.567512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.567545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.567678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.567712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 
00:35:56.449 [2024-07-26 06:28:07.567885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.567918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.568083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.568118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.568291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.568329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.568536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.568573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.568710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.568756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 
00:35:56.449 [2024-07-26 06:28:07.568895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.568928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.569055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.569094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.569253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.569286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.569446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.569478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.569643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.569676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 
00:35:56.449 [2024-07-26 06:28:07.569808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.569840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.570023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.570056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.449 qpair failed and we were unable to recover it. 00:35:56.449 [2024-07-26 06:28:07.570199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.449 [2024-07-26 06:28:07.570232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.570479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.570512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.570684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.570716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 
00:35:56.450 [2024-07-26 06:28:07.570874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.570907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.571099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.571133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.571266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.571298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.571434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.571469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.571614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.571647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 
00:35:56.450 [2024-07-26 06:28:07.571822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.571855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.572001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.572034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.572217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.572264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.572434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.572482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.572687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.572722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 
00:35:56.450 [2024-07-26 06:28:07.572923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.572959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.573156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.573190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.573380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.573413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.573560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.573593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.573730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.573763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 
00:35:56.450 [2024-07-26 06:28:07.574017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.574051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.574194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.574228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.574373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.574406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.574563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.574596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.574782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.574815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 
00:35:56.450 [2024-07-26 06:28:07.574954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.574988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.575155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.575189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.575327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.575362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.575523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.575557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.575746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.575779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 
00:35:56.450 [2024-07-26 06:28:07.575917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.575949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.576083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.576116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.576255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.576294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.576424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.576456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.576644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.576676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 
00:35:56.450 [2024-07-26 06:28:07.576810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.576844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.576998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.577031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.577166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.577200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.577338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.450 [2024-07-26 06:28:07.577371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.450 qpair failed and we were unable to recover it. 00:35:56.450 [2024-07-26 06:28:07.577504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.451 [2024-07-26 06:28:07.577537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.451 qpair failed and we were unable to recover it. 
00:35:56.451 [2024-07-26 06:28:07.577665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.451 [2024-07-26 06:28:07.577698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.451 qpair failed and we were unable to recover it. 00:35:56.451 [2024-07-26 06:28:07.577854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.451 [2024-07-26 06:28:07.577887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.451 qpair failed and we were unable to recover it. 00:35:56.451 [2024-07-26 06:28:07.578036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.451 [2024-07-26 06:28:07.578075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.451 qpair failed and we were unable to recover it. 00:35:56.451 [2024-07-26 06:28:07.578281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.451 [2024-07-26 06:28:07.578315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.451 qpair failed and we were unable to recover it. 00:35:56.451 [2024-07-26 06:28:07.578498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.451 [2024-07-26 06:28:07.578536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.451 qpair failed and we were unable to recover it. 
00:35:56.451 [2024-07-26 06:28:07.578700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.578734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.578884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.578919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.579086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.579120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.579283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.579317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.579441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.579474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.579630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.579663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.579808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.579842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.579981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.580014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.580193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.580227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.580368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.580402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.580594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.580628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.580759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.580792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.580980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.581013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.581151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.581185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.581323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.581356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.581490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.581523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.581667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.581701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.581860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.581893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.582033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.582070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.582239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.582272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.582405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.582439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.582596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.582629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.582801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.582834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.582992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.583025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.583193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.583242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.583384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.583419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.583571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.583608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.583748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.583786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.583928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.583962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.584117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.584152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.584287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.584320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.451 qpair failed and we were unable to recover it.
00:35:56.451 [2024-07-26 06:28:07.584480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.451 [2024-07-26 06:28:07.584513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.584706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.584739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.584994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.585028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.585229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.585274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.585410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.585444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.585608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.585642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.585804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.585837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.586083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.586117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.586248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.586281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.586455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.586491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.586701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.586742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.586878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.586912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.587074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.587108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.587272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.587305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.587445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.587478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.587643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.587691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.587857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.587892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.588035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.588076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.588330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.588363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.588529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.588563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.588705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.588738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.588882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.588914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.589115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.589148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.589307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.589341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.589531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.589582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.589737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.589772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.589901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.589934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.590131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.590165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.590303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.590336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.590500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.590532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.590667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.590701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.590838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.590870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.452 qpair failed and we were unable to recover it.
00:35:56.452 [2024-07-26 06:28:07.591030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.452 [2024-07-26 06:28:07.591077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.591224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.591257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.591395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.591428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.591588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.591622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.591752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.591790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.592031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.592073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.592216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.592250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.592394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.592428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.592584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.592617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.592746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.592780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.592927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.592961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.593131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.593165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.593329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.593363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.593526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.593559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.593723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.593756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.593894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.593927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.594066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.594099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.594249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.594281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.594432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.594481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.594651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.594683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.594812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.594844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.594974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.595005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.595147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.595180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.595330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.595364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.595526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.595558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.595690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.595722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.595851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.595884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.596035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.596084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.596229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.596264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.596473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.596506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.596644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.596677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.596839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.596873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.597008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.597041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.597206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.597254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.597416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.597452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.597594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.597628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.597791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.597838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.598020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.453 [2024-07-26 06:28:07.598056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.453 qpair failed and we were unable to recover it.
00:35:56.453 [2024-07-26 06:28:07.598244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.598278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.598434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.598468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.598634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.598666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.598808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.598844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.598974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.599007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.599173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.599208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.599347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.599385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.599588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.599621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.599779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.599813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.599954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.599987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.600124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.600159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.600321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.600355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.600491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.600537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.600702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.600735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.600879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.600912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.601052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.601091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.601251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.601283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.601445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.601478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.601614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.601647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.601846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.601882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.602031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.602071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.602234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.602268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.602418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.602466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.602622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.602658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.602817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.602851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.602983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.603017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.603217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.603250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.603390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.603423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.603555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.603589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.603719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.603752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.603888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.603922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.604090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.604126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.604267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.604301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.604449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.604482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.604663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.604700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.604844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.604879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.605018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.454 [2024-07-26 06:28:07.605052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.454 qpair failed and we were unable to recover it.
00:35:56.454 [2024-07-26 06:28:07.605207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.605240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.605398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.605431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.605568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.605601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.605792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.605825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.605958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.605991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.606157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.606191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.606333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.606367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.606530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.606563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.606728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.606780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.606958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.606999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.607154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.607190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.607324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.607358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.607551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.607584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.607750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.607786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.607957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.607990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.608128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.608161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.608322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.608354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.608491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.608523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.608654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.608686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.608814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.608846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.609027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.609066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.609223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.609258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.609417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.609465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.609643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.609677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.609867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.609903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.610031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.610073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.610215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.610249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.610383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.610416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.610558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.610593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.610724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.610757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.610937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.610976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.611136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.611170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.611337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.611371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.611534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.611568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.611700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.611733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.611896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.611929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.612077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.455 [2024-07-26 06:28:07.612114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.455 qpair failed and we were unable to recover it.
00:35:56.455 [2024-07-26 06:28:07.612292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.612354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.612508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.612544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.612711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.612744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.612989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.613022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.613169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.613201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.613335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.613369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.613534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.613568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.613733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.613767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.613922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.613955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.614090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.614125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.614259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.614292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.614473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.614507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.614639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.614678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.614835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.614869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.615108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.456 [2024-07-26 06:28:07.615142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.456 qpair failed and we were unable to recover it.
00:35:56.456 [2024-07-26 06:28:07.615283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.615316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.615445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.615479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.615640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.615673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.615861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.615895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.616041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.616083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 
00:35:56.456 [2024-07-26 06:28:07.616226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.616259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.616420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.616459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.616604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.616639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.616811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.616849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.616987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.617021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 
00:35:56.456 [2024-07-26 06:28:07.617172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.617207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.617362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.617396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.617551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.617585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.617721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.617762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.617933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.617967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 
00:35:56.456 [2024-07-26 06:28:07.618174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.618208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.618357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.618400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.618538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.618571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.618756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.618789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.618922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.618955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 
00:35:56.456 [2024-07-26 06:28:07.619097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.619131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.456 [2024-07-26 06:28:07.619281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.456 [2024-07-26 06:28:07.619327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.456 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.619478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.619514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.619647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.619681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.619843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.619876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 
00:35:56.457 [2024-07-26 06:28:07.620022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.620056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.620204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.620238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.620370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.620404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.620542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.620576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.620717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.620752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 
00:35:56.457 [2024-07-26 06:28:07.620879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.620912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.621098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.621132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.621263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.621296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.621458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.621492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.621632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.621666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 
00:35:56.457 [2024-07-26 06:28:07.621832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.621865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.622020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.622075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.622223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.622264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.622437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.622483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.622629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.622664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 
00:35:56.457 [2024-07-26 06:28:07.622812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.622848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.623006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.623040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.623192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.623226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.623361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.623394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.623527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.623560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 
00:35:56.457 [2024-07-26 06:28:07.623722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.623755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.623941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.623975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.627177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.627227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.627391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.627427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.627566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.627601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 
00:35:56.457 [2024-07-26 06:28:07.627776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.627810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.627976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.628048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.628199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.457 [2024-07-26 06:28:07.628233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.457 qpair failed and we were unable to recover it. 00:35:56.457 [2024-07-26 06:28:07.628360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.628393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.628558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.628591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 
00:35:56.458 [2024-07-26 06:28:07.628738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.628771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.628934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.628968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.629129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.629163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.629325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.629359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.629519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.629552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 
00:35:56.458 [2024-07-26 06:28:07.629718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.629751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.629910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.629943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.630087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.630121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.630261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.630295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.630436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.630469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 
00:35:56.458 [2024-07-26 06:28:07.630613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.630647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.630852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.630900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.631066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.631104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.631259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.631294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.631443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.631478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 
00:35:56.458 [2024-07-26 06:28:07.631616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.631650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.631820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.631855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.632005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.632053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.632250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.632287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.632423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.632457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 
00:35:56.458 [2024-07-26 06:28:07.632586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.632619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.632784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.632818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.632945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.632983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.633133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.633166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.633305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.633339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 
00:35:56.458 [2024-07-26 06:28:07.633515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.633548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.633676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.633713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.633874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.633911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.634075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.634109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 00:35:56.458 [2024-07-26 06:28:07.634248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.458 [2024-07-26 06:28:07.634282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.458 qpair failed and we were unable to recover it. 
00:35:56.458 [2024-07-26 06:28:07.634441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.458 [2024-07-26 06:28:07.634474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.458 qpair failed and we were unable to recover it.
00:35:56.458 [2024-07-26 06:28:07.634603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.458 [2024-07-26 06:28:07.634636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.458 qpair failed and we were unable to recover it.
00:35:56.458 [2024-07-26 06:28:07.634777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.458 [2024-07-26 06:28:07.634811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.458 qpair failed and we were unable to recover it.
00:35:56.458 [2024-07-26 06:28:07.634948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.458 [2024-07-26 06:28:07.634982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.458 qpair failed and we were unable to recover it.
00:35:56.458 [2024-07-26 06:28:07.635143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.458 [2024-07-26 06:28:07.635176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.458 qpair failed and we were unable to recover it.
00:35:56.458 [2024-07-26 06:28:07.635328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.458 [2024-07-26 06:28:07.635362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.458 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.635534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.635568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.635712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.635750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.635929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.635962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.636153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.636188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.636333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.636367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.636527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.636561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.636695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.636728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.636898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.636934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.637072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.637105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.637265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.637298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.637439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.637471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.637638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.637671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.637799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.637832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.637980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.638016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.638178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.638211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.638373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.638407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.638545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.638579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.638713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.638747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.638901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.638935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.639100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.639134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.639265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.639299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.639463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.639495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.639640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.639673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.639832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.639865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.639994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.640027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.640170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.640203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.640363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.640401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.640588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.640621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.640774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.640807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.640970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.641003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.641167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.641201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.641393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.641431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.641575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.641608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.641739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.641772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.641943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.641976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.642117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.642151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.642279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.459 [2024-07-26 06:28:07.642313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.459 qpair failed and we were unable to recover it.
00:35:56.459 [2024-07-26 06:28:07.642470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.642503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.642662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.642694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.642827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.642860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.642993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.643026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.643192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.643225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.643383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.643431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.643624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.643661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.643810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.643843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.644007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.644042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.644233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.644266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.644394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.644427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.644572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.644606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.644765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.644812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.644979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.645015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.645202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.645236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.645399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.645432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.645583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.645618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.645749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.645782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.645964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.645998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.646171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.646205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.646342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.646375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.646504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.646537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.646676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.646711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.646871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.646904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.647084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.647121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.647276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.647322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.647471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.647505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.647648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.647681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.647829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.647862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.647987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.648025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.648175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.648209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.648346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.648379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.648519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.648551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.648704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.648737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.648864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.648896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.649043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.649083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.649218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.460 [2024-07-26 06:28:07.649250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.460 qpair failed and we were unable to recover it.
00:35:56.460 [2024-07-26 06:28:07.649376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.649408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.649570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.649604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.649735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.649767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.649904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.649937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.650105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.650141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.650297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.650330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.650514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.650547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.650692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.650728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.650916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.650949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.651117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.651151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.651315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.651348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.651513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.651548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.651707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.651740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.651887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.651919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.652067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.652100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.652262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.652294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.652454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.652486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.652619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.652652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.652780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.652813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.652963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.652999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.653146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.653181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.653364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.653410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.653544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.653579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.653709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.653742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.653892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.653928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.654103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.654137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.654271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.654317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.654447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.654480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.654645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.654678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.654813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.654846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.655022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.655056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.655215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.655262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.461 [2024-07-26 06:28:07.655408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.461 [2024-07-26 06:28:07.655443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.461 qpair failed and we were unable to recover it.
00:35:56.462 [2024-07-26 06:28:07.655635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.462 [2024-07-26 06:28:07.655671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.462 qpair failed and we were unable to recover it.
00:35:56.462 [2024-07-26 06:28:07.655818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.462 [2024-07-26 06:28:07.655851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.462 qpair failed and we were unable to recover it.
00:35:56.462 [2024-07-26 06:28:07.655993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.656026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.656197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.656234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.656367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.656400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.656536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.656575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.656743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.656776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 
00:35:56.462 [2024-07-26 06:28:07.656902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.656935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.657086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.657121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.657268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.657301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.657439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.657472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.657626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.657659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 
00:35:56.462 [2024-07-26 06:28:07.657789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.657821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.657968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.658000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.658134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.658168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.658347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.658396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.658567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.658603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 
00:35:56.462 [2024-07-26 06:28:07.658752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.658786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.658956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.658992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.659193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.659227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.659358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.659390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.659539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.659571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 
00:35:56.462 [2024-07-26 06:28:07.659703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.659735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.659888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.659921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.660091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.660124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.660254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.660286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.660420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.660458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 
00:35:56.462 [2024-07-26 06:28:07.660616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.660648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.660782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.660815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.660971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.661003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.661151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.661185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.661319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.661355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 
00:35:56.462 [2024-07-26 06:28:07.661497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.661530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.661657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.661690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.661853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.661886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.662017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.662051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 00:35:56.462 [2024-07-26 06:28:07.662189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.462 [2024-07-26 06:28:07.662222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.462 qpair failed and we were unable to recover it. 
00:35:56.462 [2024-07-26 06:28:07.662359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.662392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.662557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.662590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.662728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.662761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.662919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.662953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.663093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.663127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 
00:35:56.463 [2024-07-26 06:28:07.663269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.663301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.663438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.663473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.663605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.663638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.663766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.663799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.663956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.663989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 
00:35:56.463 [2024-07-26 06:28:07.664138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.664171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.664318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.664364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.664503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.664538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.664678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.664712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.664845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.664877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 
00:35:56.463 [2024-07-26 06:28:07.665028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.665066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.665206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.665250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.665393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.665425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.665582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.665614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.665744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.665776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 
00:35:56.463 [2024-07-26 06:28:07.665972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.666004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.666172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.666211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.666357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.666391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.666547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.666581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.666754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.666788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 
00:35:56.463 [2024-07-26 06:28:07.666925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.666959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.667105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.667160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.667300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.667335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.667482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.667516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.667681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.667721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 
00:35:56.463 [2024-07-26 06:28:07.667893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.667925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.668056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.668096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.668237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.668269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.668421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.668453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.668613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.668645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 
00:35:56.463 [2024-07-26 06:28:07.668771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.668802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.668932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.668964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.463 [2024-07-26 06:28:07.669096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.463 [2024-07-26 06:28:07.669131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.463 qpair failed and we were unable to recover it. 00:35:56.464 [2024-07-26 06:28:07.669310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.464 [2024-07-26 06:28:07.669357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.464 qpair failed and we were unable to recover it. 00:35:56.464 [2024-07-26 06:28:07.669517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.464 [2024-07-26 06:28:07.669553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.464 qpair failed and we were unable to recover it. 
00:35:56.464 [2024-07-26 06:28:07.669697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.464 [2024-07-26 06:28:07.669733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.464 qpair failed and we were unable to recover it. 00:35:56.464 [2024-07-26 06:28:07.669871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.464 [2024-07-26 06:28:07.669904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.464 qpair failed and we were unable to recover it. 00:35:56.464 [2024-07-26 06:28:07.670036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.464 [2024-07-26 06:28:07.670079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.464 qpair failed and we were unable to recover it. 00:35:56.464 [2024-07-26 06:28:07.670236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.464 [2024-07-26 06:28:07.670283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.464 qpair failed and we were unable to recover it. 00:35:56.464 [2024-07-26 06:28:07.670452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.464 [2024-07-26 06:28:07.670487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.464 qpair failed and we were unable to recover it. 
00:35:56.467 [2024-07-26 06:28:07.690570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.690603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.690760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.690793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.690948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.690981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.691145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.691179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.691307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.691339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 
00:35:56.467 [2024-07-26 06:28:07.691464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.691497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.691656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.691689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.691818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.691851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.692017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.692050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.692190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.692223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 
00:35:56.467 [2024-07-26 06:28:07.692366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.692412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.692550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.692585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.692731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.692764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.692953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.692986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.693123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.693156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 
00:35:56.467 [2024-07-26 06:28:07.693302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.693335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.693501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.693533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.693671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.693704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.693840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.693875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.694005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.694038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 
00:35:56.467 [2024-07-26 06:28:07.694192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.694226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.694381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.694425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.694554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.694587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.694714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.694747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.694908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.694941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 
00:35:56.467 [2024-07-26 06:28:07.695080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.695114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.695249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.695283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.695477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.467 [2024-07-26 06:28:07.695510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.467 qpair failed and we were unable to recover it. 00:35:56.467 [2024-07-26 06:28:07.695646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.695678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.695810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.695843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.695988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.696021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.696160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.696193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.696329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.696363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.696490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.696522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.696691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.696729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.696860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.696892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.697025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.697057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.697240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.697272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.697398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.697431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.697593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.697626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.697804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.697836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.697970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.698005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.698160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.698194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.698328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.698361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.698550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.698584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.698724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.698757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.698892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.698925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.699090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.699124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.699297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.699330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.699521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.699555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.699709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.699744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.699900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.699945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.700082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.700116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.700264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.700297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.700430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.700462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.700590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.700623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.700781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.700813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.700933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.700966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.701100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.701133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.701290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.701323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.701484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.701516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.701658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.701690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.701825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.701857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.701983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.702015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 00:35:56.468 [2024-07-26 06:28:07.702167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.702200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.468 qpair failed and we were unable to recover it. 
00:35:56.468 [2024-07-26 06:28:07.702337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.468 [2024-07-26 06:28:07.702371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.702505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.702538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.702721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.702753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.702894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.702927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.703083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.703116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 
00:35:56.469 [2024-07-26 06:28:07.703255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.703288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.703415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.703448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.703614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.703648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.703780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.703814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 00:35:56.469 [2024-07-26 06:28:07.703985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.469 [2024-07-26 06:28:07.704023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.469 qpair failed and we were unable to recover it. 
00:35:56.469 [2024-07-26 06:28:07.704167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.469 [2024-07-26 06:28:07.704200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.469 qpair failed and we were unable to recover it.
00:35:56.472 [... the three log lines above repeat continuously (~115 occurrences) from 2024-07-26 06:28:07.704167 through 06:28:07.724896, alternating between tqpair=0x6150001ffe80 and tqpair=0x61500021ff00; every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED), and in each case the qpair failed and could not be recovered ...]
00:35:56.472 [2024-07-26 06:28:07.725022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.725055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.725192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.725225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.725369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.725401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.725543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.725575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.725727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.725764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 
00:35:56.472 [2024-07-26 06:28:07.725902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.725935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.726096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.726129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.726267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.726300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.726447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.726479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.726635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.726668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 
00:35:56.472 [2024-07-26 06:28:07.726833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.726866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.726993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.727025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.727187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.727220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.727343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.727375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.727531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.727563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 
00:35:56.472 [2024-07-26 06:28:07.727706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.727739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.727896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.727929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.728067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.728100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.728270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.728303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 00:35:56.472 [2024-07-26 06:28:07.728467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.472 [2024-07-26 06:28:07.728500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.472 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.728634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.728667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.728827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.728860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.729005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.729037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.729172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.729205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.729334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.729366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.729523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.729556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.729728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.729761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.729889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.729921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.730080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.730113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.730242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.730275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.730440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.730473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.730614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.730647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.730811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.730844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.730983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.731015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.731178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.731211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.731341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.731374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.731532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.731564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.731694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.731726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.731888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.731921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.732078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.732112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.732250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.732283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.732410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.732443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.732602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.732635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.732763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.732795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.732932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.732970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.733127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.733161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.733373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.733407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.733537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.733569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.733715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.733749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.733879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.733925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.734080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.734114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.734270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.734303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.734444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.734477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.734609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.734642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.734773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.734806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 
00:35:56.473 [2024-07-26 06:28:07.734938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.734970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.735129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.473 [2024-07-26 06:28:07.735163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.473 qpair failed and we were unable to recover it. 00:35:56.473 [2024-07-26 06:28:07.735290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.735323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.735489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.735522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.735670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.735702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 
00:35:56.474 [2024-07-26 06:28:07.735846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.735880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.736014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.736047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.736188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.736220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.736351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.736383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.736523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.736556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 
00:35:56.474 [2024-07-26 06:28:07.736686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.736719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.736890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.736923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.737055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.737093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.737231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.737264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.737403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.737436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 
00:35:56.474 [2024-07-26 06:28:07.737567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.737600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.737746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.737779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.474 [2024-07-26 06:28:07.737920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.474 [2024-07-26 06:28:07.737953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.474 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.738093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.738128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.738334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.738368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 
00:35:56.746 [2024-07-26 06:28:07.738509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.738544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.738685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.738718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.738877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.738909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.739081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.739114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.739243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.739277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 
00:35:56.746 [2024-07-26 06:28:07.739407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.739440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.739570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.739602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.739736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.739769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.739907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.739941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 00:35:56.746 [2024-07-26 06:28:07.740099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.746 [2024-07-26 06:28:07.740138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.746 qpair failed and we were unable to recover it. 
00:35:56.746 [2024-07-26 06:28:07.740302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.746 [2024-07-26 06:28:07.740335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.746 qpair failed and we were unable to recover it.
00:35:56.746 [2024-07-26 06:28:07.740468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.746 [2024-07-26 06:28:07.740501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.746 qpair failed and we were unable to recover it.
00:35:56.746 [2024-07-26 06:28:07.740631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.746 [2024-07-26 06:28:07.740663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.746 qpair failed and we were unable to recover it.
00:35:56.746 [2024-07-26 06:28:07.740825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.746 [2024-07-26 06:28:07.740858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.746 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.741014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.741047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.741197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.741230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.741357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.741390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.741526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.741559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.741721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.741753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.741922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.741956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.742103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.742136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.742267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.742301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.742432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.742466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.742605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.742638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.742766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.742799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.742960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.742993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.743153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.743186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.743327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.743359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.743500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.743532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.743660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.743692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.743825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.743858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.744018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.744051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.744282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.744314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.744451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.744485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.744619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.744652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.744813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.744846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.745018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.745051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.745204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.745238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.745375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.745420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.745584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.745617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.745747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.745779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.745909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.745942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.746100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.746134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.746335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.746368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.746554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.746587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.746711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.746743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.746900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.746932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.747092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.747125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.747262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.747295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.747 [2024-07-26 06:28:07.747433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.747 [2024-07-26 06:28:07.747467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.747 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.747610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.747642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.747770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.747803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.747962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.747994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.748125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.748159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.748323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.748355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.748485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.748518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.748646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.748678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.748818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.748850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.748993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.749026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.749191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.749226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.749362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.749394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.749534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.749573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.749722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.749757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.749934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.749967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.750104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.750139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.750300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.750333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.750463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.750496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.750655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.750687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.750873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.750905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.751038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.751077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.751204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.751237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.751381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.751413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.751567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.751600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.751745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.751778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.751914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.751947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.752085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.752119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.752255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.752292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.752421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.752453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.752613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.752646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.752818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.752851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.752986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.753019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.753159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.753191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.753322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.753355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.753515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.753548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.753696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.753729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.753899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.753932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.754082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.748 [2024-07-26 06:28:07.754116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.748 qpair failed and we were unable to recover it.
00:35:56.748 [2024-07-26 06:28:07.754255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.754288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.754441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.754474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.754600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.754633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.754766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.754799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.754939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.754972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.755126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.755159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.755293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.755326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.755457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.755490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.755625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.755657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.755841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.755873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.755998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.756031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.756172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.756205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.756331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.756364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.756513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.756546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.756701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.756734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.756894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.756937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.757096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.757129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.757274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.749 [2024-07-26 06:28:07.757306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.749 qpair failed and we were unable to recover it.
00:35:56.749 [2024-07-26 06:28:07.757446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.757478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.757634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.757666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.757803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.757835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.757993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.758025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.758164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.758198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 
00:35:56.749 [2024-07-26 06:28:07.758328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.758362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.758493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.758525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.758654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.758687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.758830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.758863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.759048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.759090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 
00:35:56.749 [2024-07-26 06:28:07.759229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.759262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.759390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.759426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.759580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.759613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.759779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.759812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.759972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.760005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 
00:35:56.749 [2024-07-26 06:28:07.760167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.760201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.760363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.760396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.760533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.760566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.760741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.749 [2024-07-26 06:28:07.760777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.749 qpair failed and we were unable to recover it. 00:35:56.749 [2024-07-26 06:28:07.760930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.760972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.761113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.761146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.761307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.761340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.761500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.761533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.761674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.761706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.761842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.761876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.762012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.762045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.762226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.762259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.762424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.762457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.762592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.762625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.762759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.762793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.762952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.762985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.763145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.763179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.763310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.763342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.763492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.763525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.763683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.763716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.763844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.763877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.764057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.764106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.764236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.764269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.764429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.764461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.764593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.764627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.764755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.764788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.764918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.764952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.765116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.765150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.765284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.765317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.765450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.765483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.765615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.765648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.765779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.765812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.765949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.765981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.766104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.766137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.766291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.766323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.766455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.766488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.766646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.766683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.766854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.766886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.767018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.767051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.750 [2024-07-26 06:28:07.767196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.767229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 
00:35:56.750 [2024-07-26 06:28:07.767359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.750 [2024-07-26 06:28:07.767391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.750 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.767548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.767580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.767720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.767752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.767879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.767911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.768037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.768075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-07-26 06:28:07.768261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.768296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.768424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.768467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.768654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.768687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.768844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.768876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.769035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.769081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-07-26 06:28:07.769217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.769250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.769418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.769450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.769602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.769634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.769796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.769830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.769970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.770003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-07-26 06:28:07.770174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.770208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.770372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.770404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.770558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.770590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.770720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.770753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.770909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.770942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-07-26 06:28:07.771098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.771150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.771275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.771308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.771460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.771493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.771686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.771718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.771864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.771897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-07-26 06:28:07.772025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.772064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.772227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.772260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.772412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.772445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.772590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.772622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-07-26 06:28:07.772786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.772819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-07-26 06:28:07.772976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.751 [2024-07-26 06:28:07.773009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.751 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats ~110 more times between 06:28:07.773152 and 06:28:07.793391. The tqpair is 0x6150001ffe80 for most of the run, switches to 0x615000210000 at 06:28:07.790196, appears once as 0x6150001f2780 at 06:28:07.793032, then returns to 0x6150001ffe80 ...]
00:35:56.755 [2024-07-26 06:28:07.793524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.793556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.793680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.793713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.793845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.793878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.794011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.794043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.794183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.794216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 
00:35:56.755 [2024-07-26 06:28:07.794356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.794399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.794556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.794588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.794744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.794776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.794937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.794969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.795108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.795141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 
00:35:56.755 [2024-07-26 06:28:07.795298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.795330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.795457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.795493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.795658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.795690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.795847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.795879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.796034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.796074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 
00:35:56.755 [2024-07-26 06:28:07.796217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.796249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.796381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.796413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.796539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.796571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.796694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.796726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.796863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.796896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 
00:35:56.755 [2024-07-26 06:28:07.797067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.797100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.797234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.797267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.797392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.797424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.797552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.797584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.797743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.797776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 
00:35:56.755 [2024-07-26 06:28:07.797959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.797991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.798174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.798207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.798340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.798373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.798549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.798582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 00:35:56.755 [2024-07-26 06:28:07.798718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.798750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.755 qpair failed and we were unable to recover it. 
00:35:56.755 [2024-07-26 06:28:07.798879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.755 [2024-07-26 06:28:07.798912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.799078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.799111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.799246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.799279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.799401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.799433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.799552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.799584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 
00:35:56.756 [2024-07-26 06:28:07.799750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.799782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.799942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.799974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.800162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.800195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.800340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.800393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.800545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.800581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 
00:35:56.756 [2024-07-26 06:28:07.800743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.800792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.800944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.800981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.801118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.801151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.801310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.801342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.801471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.801503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 
00:35:56.756 [2024-07-26 06:28:07.801636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.801668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.801823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.801855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.801984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.802017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.802192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.802228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.802403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.802436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 
00:35:56.756 [2024-07-26 06:28:07.802591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.802623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.802760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.802798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.802948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.802981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.803155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.803189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.803320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.803352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 
00:35:56.756 [2024-07-26 06:28:07.803476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.803509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.803640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.803673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.803823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.803856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.803991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.804025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.804192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.804225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 
00:35:56.756 [2024-07-26 06:28:07.804352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.804384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.804523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.804555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.804695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.804728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.804885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.804917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.805054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.805094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 
00:35:56.756 [2024-07-26 06:28:07.805341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.805374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.756 qpair failed and we were unable to recover it. 00:35:56.756 [2024-07-26 06:28:07.805509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.756 [2024-07-26 06:28:07.805540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 00:35:56.757 [2024-07-26 06:28:07.805672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.805704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 00:35:56.757 [2024-07-26 06:28:07.805863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.805895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 00:35:56.757 [2024-07-26 06:28:07.806029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.806067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 
00:35:56.757 [2024-07-26 06:28:07.806232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.806263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 00:35:56.757 [2024-07-26 06:28:07.806399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.806431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 00:35:56.757 [2024-07-26 06:28:07.806551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.806583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 00:35:56.757 [2024-07-26 06:28:07.806772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.806804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 00:35:56.757 [2024-07-26 06:28:07.806940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.757 [2024-07-26 06:28:07.806972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.757 qpair failed and we were unable to recover it. 
00:35:56.757 [2024-07-26 06:28:07.807117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.807151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.807276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.807308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.807467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.807499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.807668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.807701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.807838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.807871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.807995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.808027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.808192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.808224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.808358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.808391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.808514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.808547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.808708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.808740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.808871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.808904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.809069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.809102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.809239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.809272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.809405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.809437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.809579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.809610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.809738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.809770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.809897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.809933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.810082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.810114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.810286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.810318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.810464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.810496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.810631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.810665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.810789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.810821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.810957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.810989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.811130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.811163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.811297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.811330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.811471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.811503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.811640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.811672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.811830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.811862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.757 [2024-07-26 06:28:07.811989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.757 [2024-07-26 06:28:07.812034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.757 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.812175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.812208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.812349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.812381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.812553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.812585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.812709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.812741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.812880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.812913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.813044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.813090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.813216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.813248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.813373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.813405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.813592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.813624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.813755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.813787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.813939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.813971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.814126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.814158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.814282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.814314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.814478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.814511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.814682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.814715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.814890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.814923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.815051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.815090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.815237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.815269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.815408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.815441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.815576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.815607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.815766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.815798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.815923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.815955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.816121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.816154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.816293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.816326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.816458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.816491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.816623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.816655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.816825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.816858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.816988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.817025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.817156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.817189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.817329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.817361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.817492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.817524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.817649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.817682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.817845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.817877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.818045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.818083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.818229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.818262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.818385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.818418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.818575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.758 [2024-07-26 06:28:07.818607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.758 qpair failed and we were unable to recover it.
00:35:56.758 [2024-07-26 06:28:07.818780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.818813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.818960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.818993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.819141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.819173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.819300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.819332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.819499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.819537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.819731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.819763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.819895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.819927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.820087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.820121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.820254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.820287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.820410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.820443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.820579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.820611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.820739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.820771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.820906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.820938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.821099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.821132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.821263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.821295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.821425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.821456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.821620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.821652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.821820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.821853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.821980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.822012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.822151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.822182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.822348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.822380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.822536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.822568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.822708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.822747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.822908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.822940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.823090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.823123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.823256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.823287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.823448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.823515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.823677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.823708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.823870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.823902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.824030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.824067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.824204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.824240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.824371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.824403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.824568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.824600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.824727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.824758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.824896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.824928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.825071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.825104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.825239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.825270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.759 [2024-07-26 06:28:07.825427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.759 [2024-07-26 06:28:07.825459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.759 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.825615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.825647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.825778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.825810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.825942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.825973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.826141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.826173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.826310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.826342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.826466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.826497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.826663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.826696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.826840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.826872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.827014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.827046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.827191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.827223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.827351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.760 [2024-07-26 06:28:07.827382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.760 qpair failed and we were unable to recover it.
00:35:56.760 [2024-07-26 06:28:07.827537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.827568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.827727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.827758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.827889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.827921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.828051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.828096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.828251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.828282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 
00:35:56.760 [2024-07-26 06:28:07.828422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.828454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.828586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.828618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.828773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.828805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.828956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.828987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.829153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.829186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 
00:35:56.760 [2024-07-26 06:28:07.829318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.829351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.829474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.829505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.829665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.829697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.829828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.829860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.830019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.830050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 
00:35:56.760 [2024-07-26 06:28:07.830220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.830251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.830388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.830422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.830578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.830610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.830745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.830778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.760 [2024-07-26 06:28:07.830948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.830980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 
00:35:56.760 [2024-07-26 06:28:07.831116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.760 [2024-07-26 06:28:07.831148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.760 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.831284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.831319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.831501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.831532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.831663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.831695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.831824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.831857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 
00:35:56.761 [2024-07-26 06:28:07.832020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.832052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.832214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.832246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.832437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.832469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.832626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.832658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.832805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.832837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 
00:35:56.761 [2024-07-26 06:28:07.832962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.832993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.833150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.833182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.833322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.833353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.833510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.833541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.833675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.833707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 
00:35:56.761 [2024-07-26 06:28:07.833896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.833928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.834055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.834101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.834237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.834269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.834429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.834462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.834615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.834647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 
00:35:56.761 [2024-07-26 06:28:07.834786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.834817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.834961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.835003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.835141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.835173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.835329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.835360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.835491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.835523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 
00:35:56.761 [2024-07-26 06:28:07.835658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.835690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.835825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.835856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.836041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.836079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.836214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.836245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.836381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.836414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 
00:35:56.761 [2024-07-26 06:28:07.836560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.836591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.836775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.836806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.836943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.836976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.837135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.837168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.837296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.837327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 
00:35:56.761 [2024-07-26 06:28:07.837476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.837508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.837645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.837677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.837842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.761 [2024-07-26 06:28:07.837873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.761 qpair failed and we were unable to recover it. 00:35:56.761 [2024-07-26 06:28:07.838006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.838036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.838180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.838212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 
00:35:56.762 [2024-07-26 06:28:07.838344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.838374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.838508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.838540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.838670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.838701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.838860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.838891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.839040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.839078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 
00:35:56.762 [2024-07-26 06:28:07.839220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.839252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.839400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.839431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.839594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.839625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.839756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.839788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.839943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.839975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 
00:35:56.762 [2024-07-26 06:28:07.840136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.840168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.840328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.840360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.840521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.840553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.840713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.840745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 00:35:56.762 [2024-07-26 06:28:07.840899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.762 [2024-07-26 06:28:07.840930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.762 qpair failed and we were unable to recover it. 
00:35:56.762 [2024-07-26 06:28:07.841075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.841108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.841229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.841260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.841389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.841421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.841560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.841593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.841757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.841788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.841932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.841964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.842102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.842135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.842293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.842325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.842456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.842493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.842623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.842655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.842783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.842814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.842953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.842984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.843120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.843152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.843334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.843370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.843508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.843540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.843670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.843703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.843831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.843862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.844027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.844070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.844216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.844248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.844429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.844461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.762 qpair failed and we were unable to recover it.
00:35:56.762 [2024-07-26 06:28:07.844620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.762 [2024-07-26 06:28:07.844652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.844788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.844820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.844980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.845013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.845146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.845180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.845320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.845352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.845503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.845535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.845670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.845703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.845872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.845903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.846049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.846088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.846223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.846255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.846385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.846426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.846586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.846619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.846764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.846796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.846978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.847011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.847148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.847179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.847328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.847360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.847501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.847533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.847660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.847692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.847885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.847916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.848045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.848097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.848242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.848274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.848456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.848487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.848639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.848671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.848804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.848837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.848990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.849021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.849187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.849219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.849354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.849385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.849528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.849559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.849729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.849761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.849899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.849930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.850075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.850107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.850240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.850272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.850407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.850437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.850596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.850632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.850787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.850818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.850976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.851008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.851145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.851177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.851330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.763 [2024-07-26 06:28:07.851361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.763 qpair failed and we were unable to recover it.
00:35:56.763 [2024-07-26 06:28:07.851509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.851540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.851680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.851712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.851871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.851904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.852034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.852069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.852207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.852239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.852364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.852396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.852541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.852572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.852708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.852739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.852863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.852895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.853030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.853066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.853225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.853257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.853403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.853436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.853592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.853625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.853751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.853783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.853945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.853977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.854133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.854165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.854293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.854324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.854465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.854496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.854656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.854687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.854847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.854878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.855016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.855047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.855191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.855222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.855417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.855448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.855590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.855622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.855780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.855812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.855940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.855971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.856137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.856169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.856311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.856343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.856472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.856503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.856647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.856679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.856841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.856873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.857027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.857064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.857232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.857263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.857424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.857455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.857588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.857620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.857761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.857796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.857928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.857969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.764 [2024-07-26 06:28:07.858103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.764 [2024-07-26 06:28:07.858136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.764 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.858273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.858304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.858447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.858478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.858612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.858643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.858777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.858810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.859028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.859091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.859274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.859323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.859507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.859544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.859706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.859740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.859877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.859911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.860098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.860133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.860292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.860326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.860490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.860523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.860666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.860699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.860875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.860915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.861090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.861126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.861274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.861309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.861458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.861492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.861623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.861656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.861803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.765 [2024-07-26 06:28:07.861836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.765 qpair failed and we were unable to recover it.
00:35:56.765 [2024-07-26 06:28:07.861998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.862031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.862185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.862233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.862382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.862418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.862550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.862584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.862729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.862762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 
00:35:56.765 [2024-07-26 06:28:07.862906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.862940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.863103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.863137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.863275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.863308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.863518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.863568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.863719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.863755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 
00:35:56.765 [2024-07-26 06:28:07.863919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.863952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.864100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.864135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.765 [2024-07-26 06:28:07.864271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.765 [2024-07-26 06:28:07.864304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.765 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.864484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.864517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.864657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.864690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.864835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.864869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.865038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.865081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.865242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.865276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.865414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.865451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.865611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.865645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.865781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.865817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.865975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.866008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.866151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.866184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.866350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.866384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.866551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.866585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.866747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.866780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.866928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.866976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.867129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.867165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.867307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.867344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.867474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.867506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.867669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.867702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.867844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.867877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.868029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.868071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.868261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.868294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.868428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.868461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.868601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.868634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.868771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.868804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.868979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.869027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.869218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.869254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.869402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.869435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.869577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.869610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.869779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.869812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.869951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.869985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.870135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.870169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.870299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.870332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.870500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.870535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.870676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.870709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.870871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.870904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.871042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.871085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 00:35:56.766 [2024-07-26 06:28:07.871247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.766 [2024-07-26 06:28:07.871280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.766 qpair failed and we were unable to recover it. 
00:35:56.766 [2024-07-26 06:28:07.871404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.871437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.871576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.871611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.871745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.871778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.871946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.871981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.872114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.872156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 
00:35:56.767 [2024-07-26 06:28:07.872310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.872344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.872513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.872546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.872688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.872721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.872850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.872887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.873025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.873063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 
00:35:56.767 [2024-07-26 06:28:07.873225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.873273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.873413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.873448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.873587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.873620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.873784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.873817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.873954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.873987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 
00:35:56.767 [2024-07-26 06:28:07.874124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.874159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.874300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.874334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.874491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.874525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.874684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.874717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.874853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.874886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 
00:35:56.767 [2024-07-26 06:28:07.875067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.875135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.875284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.875320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.875498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.875532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.875667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.875700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.875829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.875862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 
00:35:56.767 [2024-07-26 06:28:07.875992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.876024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.876177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.876211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.876361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.876409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.876548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.876583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.876722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.876753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 
00:35:56.767 [2024-07-26 06:28:07.876910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.876947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.877088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.877122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.877275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.877308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.877444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.877478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 00:35:56.767 [2024-07-26 06:28:07.877613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.767 [2024-07-26 06:28:07.877646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.767 qpair failed and we were unable to recover it. 
00:35:56.767 [2024-07-26 06:28:07.877789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.767 [2024-07-26 06:28:07.877823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.767 qpair failed and we were unable to recover it.
00:35:56.767 [2024-07-26 06:28:07.877986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.767 [2024-07-26 06:28:07.878019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.767 qpair failed and we were unable to recover it.
00:35:56.767 [2024-07-26 06:28:07.878184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.767 [2024-07-26 06:28:07.878217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.878347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.878381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.878564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.878596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.878761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.878808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.878955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.879004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.879152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.879188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.879340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.879375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.879562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.879595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.879755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.879789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.879918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.879951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.880087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.880121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.880257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.880298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.880435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.880468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.880610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.880643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.880792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.880828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.880982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.881015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.881169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.881204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.881353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.881388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.881535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.881568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.881709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.881742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.881900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.881932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.882064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.882097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.882228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.882260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.882389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.882422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.882583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.882615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.882754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.882800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.882941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.882975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.883120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.883153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.883290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.883324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.883472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.883506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.883668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.883701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.883839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.883872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.884022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.884056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.884193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.884226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.884415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.884448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.884594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.884640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.884782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.884817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.884955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.768 [2024-07-26 06:28:07.884988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.768 qpair failed and we were unable to recover it.
00:35:56.768 [2024-07-26 06:28:07.885141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.885179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.885360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.885393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.885524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.885558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.885723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.885771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.885912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.885947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.886098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.886135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.886281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.886315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.886479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.886512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.886650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.886683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.886840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.886873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.887016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.887049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.887188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.887221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.887384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.887417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.887607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.887647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.887794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.887827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.887976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.888011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.888185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.888231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.888416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.888464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.888600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.888635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.888807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.888842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.889026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.889079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.889213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.889245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.889382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.889415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.889586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.889619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.889758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.889791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.889959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.889992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.890139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.890172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.890321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.890354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.890513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.890546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.890685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.890718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.890879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.890911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.891042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.891081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.769 [2024-07-26 06:28:07.891224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.769 [2024-07-26 06:28:07.891256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.769 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.891400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.891432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.891584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.891616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.891754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.891787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.891948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.891997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.892155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.892191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.892335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.892369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.892513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.892550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.892715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.892749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.892877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.892909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.893039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.893079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.893218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.893251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.893393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.893426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.893560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.893594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.893753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.893785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.893914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.893947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.894116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.894150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.894301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.894334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.894474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.894506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.894670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.894703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.894847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.894879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.895044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.895093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.895232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.770 [2024-07-26 06:28:07.895265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.770 qpair failed and we were unable to recover it.
00:35:56.770 [2024-07-26 06:28:07.895410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.895443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.895574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.895607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.895771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.895804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.895938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.895971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.896107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.896140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 
00:35:56.770 [2024-07-26 06:28:07.896268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.896300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.896428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.896460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.896591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.896623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.896753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.896785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.896945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.896977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 
00:35:56.770 [2024-07-26 06:28:07.897137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.897170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.897307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.897341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.897509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.897542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.897670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.897703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-07-26 06:28:07.897837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.770 [2024-07-26 06:28:07.897871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.897998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.898030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.898177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.898210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.898395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.898443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.898608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.898644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.898819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.898854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.899024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.899075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.899213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.899247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.899390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.899423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.899582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.899614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.899752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.899784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.899946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.899981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.900148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.900183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.900319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.900353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.900520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.900553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.900688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.900721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.900848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.900881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.901040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.901079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.901249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.901287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.901453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.901487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.901618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.901652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.901822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.901856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.902018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.902052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.902197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.902230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.902425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.902463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.902622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.902655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.902819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.902852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.902992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.903026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.903168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.903202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.903344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.903377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.903516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.903549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.903688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.903721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.903877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.903910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.904044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.904086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.904247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.904281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.904457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.904491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 
00:35:56.771 [2024-07-26 06:28:07.904652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.904685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.771 qpair failed and we were unable to recover it. 00:35:56.771 [2024-07-26 06:28:07.904821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.771 [2024-07-26 06:28:07.904856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.905035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.905077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.905217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.905251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.905381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.905414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 
00:35:56.772 [2024-07-26 06:28:07.905605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.905638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.905768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.905802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.905942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.905975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.906110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.906144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.906281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.906315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 
00:35:56.772 [2024-07-26 06:28:07.906458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.906491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.906678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.906711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.906864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.906912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.907074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.907109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.907258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.907295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 
00:35:56.772 [2024-07-26 06:28:07.907456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.907503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.907654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.907690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.907823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.907857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.907989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.908022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.908165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.908198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 
00:35:56.772 [2024-07-26 06:28:07.908339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.908373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.908508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.908542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.908711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.908744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.908877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.908911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.909044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.909086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 
00:35:56.772 [2024-07-26 06:28:07.909248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.909281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.909454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.909501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.909675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.909711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.909840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.909878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.910040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.910079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 
00:35:56.772 [2024-07-26 06:28:07.910223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.910257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.910394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.910428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.910574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.910607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.910763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.910798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 00:35:56.772 [2024-07-26 06:28:07.910987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.772 [2024-07-26 06:28:07.911021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.772 qpair failed and we were unable to recover it. 
00:35:56.772 [2024-07-26 06:28:07.911207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.772 [2024-07-26 06:28:07.911241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.772 qpair failed and we were unable to recover it.
00:35:56.772 [2024-07-26 06:28:07.911381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.772 [2024-07-26 06:28:07.911414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.772 qpair failed and we were unable to recover it.
00:35:56.772 [2024-07-26 06:28:07.911552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.772 [2024-07-26 06:28:07.911585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.772 qpair failed and we were unable to recover it.
00:35:56.772 [2024-07-26 06:28:07.911755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.772 [2024-07-26 06:28:07.911788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.772 qpair failed and we were unable to recover it.
00:35:56.772 [2024-07-26 06:28:07.911914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.911947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.912101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.912134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.912265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.912300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.912450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.912486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.912647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.912680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.912841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.912876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.913039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.913082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.913220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.913253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.913431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.913478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.913617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.913651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.913810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.913842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.913983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.914019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.914164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.914197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.914326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.914361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.914519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.914552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.914687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.914721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.914856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.914891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.915064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.915098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.915242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.915276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.915444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.915477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.915622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.915656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.915800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.915834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.916008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.916042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.916198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.916244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.916393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.916429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.916588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.916621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.916783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.916817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.916988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.917021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.917164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.917198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.917397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.917435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.917598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.917643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.917787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.917820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.917987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.918020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.918186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.918220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.918352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.918385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.918563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.918596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.918724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.918759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.773 qpair failed and we were unable to recover it.
00:35:56.773 [2024-07-26 06:28:07.918887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.773 [2024-07-26 06:28:07.918921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.919044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.919087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.919226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.919259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.919389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.919421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.919585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.919619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.919778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.919812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.919993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.920027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.920193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.920227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.920388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.920421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.920572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.920606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.920764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.920797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.920932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.920966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.921149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.921196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.921334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.921369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.921510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.921545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.921686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.921720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.921857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.921889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.922050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.922093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.922227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.922260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.922454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.922510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.922665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.922700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.922877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.922910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.923052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.923092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.923230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.923263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.923421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.923454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.923610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.923643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.923886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.923920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.924089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.924123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.924263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.924296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.924429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.924463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.924621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.924654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.924814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.924847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.924978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.774 [2024-07-26 06:28:07.925018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.774 qpair failed and we were unable to recover it.
00:35:56.774 [2024-07-26 06:28:07.925165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.925199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.925363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.925397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.925547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.925583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.925741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.925774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.925904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.925937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.926085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.926119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.926281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.926325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.926453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.926486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.926640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.926673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.926808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.926841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.926975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.927008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.927144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.927177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.927326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.927358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.927532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.927581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.927721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.927755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.927919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.927954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.928115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.928150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.928295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.928328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.928499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.928546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.928715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.928750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.928885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.928919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.929067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.929102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.929237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.929278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.929435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.929484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.929641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.929674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.929799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.929831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.929998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.930034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.930183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.930216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.930353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.930385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.930518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.930551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.930680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.930713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.930848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.930881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.931035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.931075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.931216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.931248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.931386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.931419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.931549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.931582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.931719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.775 [2024-07-26 06:28:07.931751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.775 qpair failed and we were unable to recover it.
00:35:56.775 [2024-07-26 06:28:07.931898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.776 [2024-07-26 06:28:07.931930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.776 qpair failed and we were unable to recover it.
00:35:56.776 [2024-07-26 06:28:07.932099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.776 [2024-07-26 06:28:07.932133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.776 qpair failed and we were unable to recover it.
00:35:56.776 [2024-07-26 06:28:07.932278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.776 [2024-07-26 06:28:07.932312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.776 qpair failed and we were unable to recover it.
00:35:56.776 [2024-07-26 06:28:07.932445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.776 [2024-07-26 06:28:07.932478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.776 qpair failed and we were unable to recover it.
00:35:56.776 [2024-07-26 06:28:07.932620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.932652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.932823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.932856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.932994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.933026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.933171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.933205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.933342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.933375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 
00:35:56.776 [2024-07-26 06:28:07.933503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.933536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.933674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.933706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.933844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.933877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.934047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.934091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.934234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.934268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 
00:35:56.776 [2024-07-26 06:28:07.934433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.934467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.934600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.934633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.934883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.934916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.935082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.935116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.935250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.935284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 
00:35:56.776 [2024-07-26 06:28:07.935448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.935481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.935616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.935649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.935779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.935812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.935951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.935984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.936121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.936155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 
00:35:56.776 [2024-07-26 06:28:07.936293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.936327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.936484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.936516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.936661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.936693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.936847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.936879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.937019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.937051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 
00:35:56.776 [2024-07-26 06:28:07.937190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.937228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.937391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.937424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.937561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.937593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.937751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.937784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.937938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.937971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 
00:35:56.776 [2024-07-26 06:28:07.938145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.938178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.938341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.938373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.938503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.938534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.776 [2024-07-26 06:28:07.938661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.776 [2024-07-26 06:28:07.938693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.776 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.938825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.938858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.938991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.939024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.939166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.939199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.939339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.939372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.939530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.939562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.939704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.939737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.939872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.939905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.940039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.940079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.940215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.940248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.940384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.940417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.940555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.940589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.940744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.940776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.940906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.940938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.941098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.941131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.941266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.941298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.941438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.941470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.941629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.941676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.941831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.941867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.942015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.942049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.942215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.942262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.942401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.942436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.942576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.942609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.942769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.942802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.942933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.942966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.943101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.943133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.943264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.943296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.943453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.943486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.943617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.943649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.943778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.943810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.943945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.943978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.944115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.944154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.944288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.944325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.944457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.944490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.944622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.944653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.944818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.944865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.945016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.945053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 
00:35:56.777 [2024-07-26 06:28:07.945220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.777 [2024-07-26 06:28:07.945267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.777 qpair failed and we were unable to recover it. 00:35:56.777 [2024-07-26 06:28:07.945437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.945472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.945601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.945635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.945787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.945835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.946094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.946132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 
00:35:56.778 [2024-07-26 06:28:07.946297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.946332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.946468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.946501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.946690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.946723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.946861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.946894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.947065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.947099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 
00:35:56.778 [2024-07-26 06:28:07.947230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.947263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.947446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.947479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.947612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.947646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.947776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.947810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.948003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.948041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 
00:35:56.778 [2024-07-26 06:28:07.948185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.948230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.948378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.948411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.948569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.948602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.948728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.948760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.948916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.948949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 
00:35:56.778 [2024-07-26 06:28:07.949088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.949121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.949264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.949296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.949467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.949500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.949643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.949676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.949813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.949845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 
00:35:56.778 [2024-07-26 06:28:07.949986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.950019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.950163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.950196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.950349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.950397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.950550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.950591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.950737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.950772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 
00:35:56.778 [2024-07-26 06:28:07.950937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.950971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.951099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.951133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.951275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.951310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.951478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.951512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.951651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.951684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 
00:35:56.778 [2024-07-26 06:28:07.951816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.951855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.951990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.952024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.952201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.952233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.778 qpair failed and we were unable to recover it. 00:35:56.778 [2024-07-26 06:28:07.952378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.778 [2024-07-26 06:28:07.952410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.952571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.952604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 
00:35:56.779 [2024-07-26 06:28:07.952743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.952775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.952929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.952960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.953094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.953128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.953262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.953293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.953421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.953452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 
00:35:56.779 [2024-07-26 06:28:07.953580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.953612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.953746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.953777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.953919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.953954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.954124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.954158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.954300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.954333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 
00:35:56.779 [2024-07-26 06:28:07.954501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.954535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.954716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.954750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.954882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.954915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.955050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.955091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.955235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.955269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 
00:35:56.779 [2024-07-26 06:28:07.955400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.955434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.955592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.955625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.955772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.955805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.955943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.955977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.956140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.956175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 
00:35:56.779 [2024-07-26 06:28:07.956335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.956382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.956534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.956569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.956727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.956775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.956928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.956964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.957098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.957133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 
00:35:56.779 [2024-07-26 06:28:07.957269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.957303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.957438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.957472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.957635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.957669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.957831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.957864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.779 [2024-07-26 06:28:07.958031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.958073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 
00:35:56.779 [2024-07-26 06:28:07.958217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.779 [2024-07-26 06:28:07.958250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.779 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.958412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.958447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.958589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.958622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.958787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.958822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.958962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.958999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 
00:35:56.780 [2024-07-26 06:28:07.959152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.959197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.959363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.959410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.959553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.959589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.959737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.959772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.959923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.959956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 
00:35:56.780 [2024-07-26 06:28:07.960092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.960127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.960291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.960324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.960493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.960526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.960661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.960695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.960826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.960860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 
00:35:56.780 [2024-07-26 06:28:07.961028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.961067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.961209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.961243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.961400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.961447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.961628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.961666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.961836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.961883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 
00:35:56.780 [2024-07-26 06:28:07.962043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.962084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.962329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.962362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.962600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.962633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.962768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.962803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.962967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.963001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 
00:35:56.780 [2024-07-26 06:28:07.963148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.963185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.963351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.963385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.963556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.963589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.963726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.963759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.963895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.963928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 
00:35:56.780 [2024-07-26 06:28:07.964082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.964117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.964281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.964315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.964457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.964490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.964629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.964663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.964817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.964850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 
00:35:56.780 [2024-07-26 06:28:07.965017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.965051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.965198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.965232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.965390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.965424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.780 qpair failed and we were unable to recover it. 00:35:56.780 [2024-07-26 06:28:07.965626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.780 [2024-07-26 06:28:07.965672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.781 qpair failed and we were unable to recover it. 00:35:56.781 [2024-07-26 06:28:07.965840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.781 [2024-07-26 06:28:07.965875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.781 qpair failed and we were unable to recover it. 
00:35:56.781 [2024-07-26 06:28:07.966012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.966046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.966199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.966231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.966371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.966404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.966533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.966566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.966727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.966758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.966901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.966942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.967085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.967120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.967279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.967313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.967439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.967473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.967604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.967637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.967785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.967820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.967976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.968010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.968178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.968213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.968388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.968422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.968620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.968656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.968816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.968851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.968983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.969028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.969194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.969228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.969363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.969397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.969542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.969575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.969738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.969772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.969933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.969967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.970110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.970146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.970281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.970313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.970441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.970473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.970635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.970668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.970823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.970855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.970984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.971029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.971188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.971236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.971383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.971420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.971558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.971591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.971732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.971765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.971907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.971941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.972111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.972145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.972290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.972324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.781 [2024-07-26 06:28:07.972462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.781 [2024-07-26 06:28:07.972495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.781 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.972629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.972663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.972802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.972836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.972976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.973010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.973151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.973185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.973319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.973352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.973494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.973527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.973668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.973701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.973850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.973883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.974010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.974043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.974191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.974230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.974405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.974438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.974576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.974610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.974735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.974768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.974904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.974937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.975069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.975103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.975240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.975273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.975411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.975444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.975571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.975604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.975742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.975775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.975903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.975937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.976083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.976117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.976274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.976307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.976474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.976507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.976669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.976701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.976834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.976868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.977039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.977078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.977216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.977249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.977392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.977426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.977584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.977618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.977767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.977806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.977973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.978021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.978178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.978215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.978360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.978395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.978530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.978563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.978718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.978751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.978884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.978917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.979078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.782 [2024-07-26 06:28:07.979136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.782 qpair failed and we were unable to recover it.
00:35:56.782 [2024-07-26 06:28:07.979283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.979319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.979490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.979523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.979659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.979692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.979875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.979907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.980066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.980100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.980232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.980264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.980421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.980455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.980611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.980644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.980776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.980810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.980965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.981012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.981162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.981198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.981350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.981388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.981570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.981623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.981794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.981829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.981963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.981996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.982156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.982189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.982324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.982357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.982508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.982542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.982698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.982744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.982871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.982903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.983035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.983074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.983214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.783 [2024-07-26 06:28:07.983247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.783 qpair failed and we were unable to recover it.
00:35:56.783 [2024-07-26 06:28:07.983373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.983406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.983563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.983595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.983730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.983763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.983916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.983949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.984085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.984118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 
00:35:56.783 [2024-07-26 06:28:07.984272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.984319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.984464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.984501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.984660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.984693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.984832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.984866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.783 [2024-07-26 06:28:07.984999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.985032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 
00:35:56.783 [2024-07-26 06:28:07.985176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.783 [2024-07-26 06:28:07.985212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.783 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.985378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.985426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.985575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.985609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.985774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.985806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.985989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.986021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 
00:35:56.784 [2024-07-26 06:28:07.986161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.986193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.986354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.986386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.986527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.986563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.986704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.986738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.986877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.986910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 
00:35:56.784 [2024-07-26 06:28:07.987050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.987090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.987233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.987265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.987426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.987459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.987614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.987648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.987815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.987848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 
00:35:56.784 [2024-07-26 06:28:07.988088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.988121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.988266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.988300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.988538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.988572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.988713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.988748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.988906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.988940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 
00:35:56.784 [2024-07-26 06:28:07.989106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.989144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.989271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.989304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.989463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.989495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.989629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.989662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.989835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.989868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 
00:35:56.784 [2024-07-26 06:28:07.990013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.990069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.990217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.990254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.990405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.990452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.990629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.990676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.990824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.990860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 
00:35:56.784 [2024-07-26 06:28:07.991000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.991034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.991203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.991235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.991365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.991398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.991533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.991566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.991733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.991767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 
00:35:56.784 [2024-07-26 06:28:07.992013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.992046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.992218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.992251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.784 qpair failed and we were unable to recover it. 00:35:56.784 [2024-07-26 06:28:07.992386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.784 [2024-07-26 06:28:07.992419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.992581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.992614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.992783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.992816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 
00:35:56.785 [2024-07-26 06:28:07.992948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.992982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.993160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.993200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.993346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.993382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.993548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.993582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.993746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.993780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 
00:35:56.785 [2024-07-26 06:28:07.993918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.993951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.994174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.994209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.994366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.994413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.994564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.994599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.994735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.994767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 
00:35:56.785 [2024-07-26 06:28:07.994906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.994939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.995074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.995108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.995245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.995277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.995418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.995454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.995588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.995621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 
00:35:56.785 [2024-07-26 06:28:07.995868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.995901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.996042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.996084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.996216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.996250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.996391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.996425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.996559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.996591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 
00:35:56.785 [2024-07-26 06:28:07.996750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.996787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.996928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.996961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.997104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.997138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.997272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.997305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.997433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.997466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 
00:35:56.785 [2024-07-26 06:28:07.997594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.997627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.997764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.997797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.997972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.998004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.998153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.998187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.998314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.998347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 
00:35:56.785 [2024-07-26 06:28:07.998508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.998541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.998671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.998704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.998865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.998897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.999027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.999068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.785 qpair failed and we were unable to recover it. 00:35:56.785 [2024-07-26 06:28:07.999208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.785 [2024-07-26 06:28:07.999242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 
00:35:56.786 [2024-07-26 06:28:07.999410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:07.999444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:07.999608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:07.999641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:07.999782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:07.999816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:07.999974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:08.000019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:08.000166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:08.000205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 
00:35:56.786 [2024-07-26 06:28:08.000349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:08.000383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:08.000545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:08.000579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:08.000720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:08.000755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:08.000900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:08.000933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 00:35:56.786 [2024-07-26 06:28:08.001195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.786 [2024-07-26 06:28:08.001243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:56.786 qpair failed and we were unable to recover it. 
00:35:56.786 [2024-07-26 06:28:08.001383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.001417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.001551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.001584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.001725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.001761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.001899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.001933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.002095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.002129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.002265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.002299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.002437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.002471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.002639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.002672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.002832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.002867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.003008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.003041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.003196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.003229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.003369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.003404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.003535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.003570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.003816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.003850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:56.786 [2024-07-26 06:28:08.003980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.004013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:56.786 [2024-07-26 06:28:08.004197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.004230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:56.786 [2024-07-26 06:28:08.004382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.004415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.004551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.004585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.004711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.004744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.004908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.004951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.005081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.005122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.005260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.005294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.005460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.005494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.005638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.005671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.005852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.786 [2024-07-26 06:28:08.005900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.786 qpair failed and we were unable to recover it.
00:35:56.786 [2024-07-26 06:28:08.006043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.006084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.006221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.006253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.006398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.006436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.006603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.006637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.006807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.006841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.007003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.007036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.007178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.007213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.007344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.007378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.007520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.007556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.007714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.007748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.007883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.007916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.008069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.008114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.008269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.008302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.008458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.008506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.008680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.008714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.008891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.008927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.009073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.009119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.009262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.009295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.009430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.009463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.009622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.009655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.009782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.009815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.009949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.009983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.010136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.010184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.010335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.010383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.010532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.010568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.010745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.010782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.010944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.010978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.011130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.011164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.011309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.011347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.011514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.011548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.011711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.011747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.011906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.011939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.012106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.012140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.012291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.012327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.012489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.012535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.012684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.012718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.012859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.787 [2024-07-26 06:28:08.012894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.787 qpair failed and we were unable to recover it.
00:35:56.787 [2024-07-26 06:28:08.013086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.013124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.013287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.013321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.013459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.013494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.013633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.013667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.013835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.013870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.014032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.014072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.014209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.014242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.014407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.014441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.014604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.014638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.014787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.014823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.014985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.015018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.015163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.015196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.015328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.015360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.015495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.015527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.015712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.015744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.015880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.015912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.016046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.016085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.016225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.016258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.016433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.016481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.016643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.016678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.016829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.016865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.017001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.017035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.017184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.017219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.017356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.017389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.017540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.017573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.017739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.017771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.017901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.017933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.018070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.018102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.018237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.018269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.018474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.018506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.018667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.018699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.018837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.018876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.019034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.019071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.019215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.788 [2024-07-26 06:28:08.019247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.788 qpair failed and we were unable to recover it.
00:35:56.788 [2024-07-26 06:28:08.019431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.019464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.019601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.019633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.019770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.019801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.019963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.019995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.020125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.020158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.020297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.020328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.020487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.020519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.020680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.020712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.020853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.020884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.021014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.021046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.021202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.021235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.021410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.021447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.021615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.021649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.021804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.021839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.021971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.022005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.022173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.022207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.022371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.022405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.022572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.022607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.022766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.022831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.023001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.023035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.023207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.023241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.023384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.023416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.023550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.023583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.023755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.023787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.023972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.024030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.024190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.024227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.024396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.024430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.024566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.024600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.024772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.024808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.024973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.025007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.025151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.025186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.025316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.025349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.025523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.025557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.025708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.025741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.025876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.025909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.026076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.026110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.026252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.026287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.026430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.026469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.026616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.026650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.026809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.026842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.789 [2024-07-26 06:28:08.027001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.789 [2024-07-26 06:28:08.027035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.789 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.027176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.027210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.027368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.027416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.027558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.027594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.027770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.027808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.027942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.027975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.028113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.028147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.028281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.028315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.028441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.028475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.028616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.028649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.028838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.028874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.029040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.029081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:56.790 [2024-07-26 06:28:08.029229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.029264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:56.790 [2024-07-26 06:28:08.029403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.029443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.029605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.029638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:56.790 [2024-07-26 06:28:08.029803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.029838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.029973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.790 [2024-07-26 06:28:08.030008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.030192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.030227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.030368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.030403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.030544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.030577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.030708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.030741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.030880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.030913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.031112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.031150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.031304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.031351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.031517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.031552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.031697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.031729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.031866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.031901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.032043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.032082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.032222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.032255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.032396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.032430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.032595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.032629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.032770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.032806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.032969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.033002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.033151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.033186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.033329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.033364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.033513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.033553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.033690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.033724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.033892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.033925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.034090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.034131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.790 qpair failed and we were unable to recover it.
00:35:56.790 [2024-07-26 06:28:08.034293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.790 [2024-07-26 06:28:08.034327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.034492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.034525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.034686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.034720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.034854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.034888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.035046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.035089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.035230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.035263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.035421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.035455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.035645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.035678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.035842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.035878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.036016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.036048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.036207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.036239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.036371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.036404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.036539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.036571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.036697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.036730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.036862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.036894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.037056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.037102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.037255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.037289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.037432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.037464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.037601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.037634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.037762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.037794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.037927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.037961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.038203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.038238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.038398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.038431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.038570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.038604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.038778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.038811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.039006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.039039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.039193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.039227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.039389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.039422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.039611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.039643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.039799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.039846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.039989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.040025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.040183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.040217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.040370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.040414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.040591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.040625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.040787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.040820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.040988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.041021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.041188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.041227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.041370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.041404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.041617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.041651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.041815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.041864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.042010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.042044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.042236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.042269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.042407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.791 [2024-07-26 06:28:08.042454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.791 qpair failed and we were unable to recover it.
00:35:56.791 [2024-07-26 06:28:08.042610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.042642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.042803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.042835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.042967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.042999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.043147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.043180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.043318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.043351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.043481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.043512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.043669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.043701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.043842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.043874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.044024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.044057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.044237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.044269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.044419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.044451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.044612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.044644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.044775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.044806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.044932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.044964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.045105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.045141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.045304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.045336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.045484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.045516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.045649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.045682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.045811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.045843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.045986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.046017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.046216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.046270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.046456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.046504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.046714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.046754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.046895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.046940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.047081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.047125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.047295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.047329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.047509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.047544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.047691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.047724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.047855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.047889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.048036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.048084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.048281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.048315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.048477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.048511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.048654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.048690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.048858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.048894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.049030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.049069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.049231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.049264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.049423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.049457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.049593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.049627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.792 [2024-07-26 06:28:08.049811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.792 [2024-07-26 06:28:08.049844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.792 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.050003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.050037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.050195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.050232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.050397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.050430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.050590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.050625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.050833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.050868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.051030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.051083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.051252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.051286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.051447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.051480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.051648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.051681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.051854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.051890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.052033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.052079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.052223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.052256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.052419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.052452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.052633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.052666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.052834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.052871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.053030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.053069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.053231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.053265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.053409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.053442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.053603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.053636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.053793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.053842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.054015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.054050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.054202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.054240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.054387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.054418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.054607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.054639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.054782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.054815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.054964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.055002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.793 qpair failed and we were unable to recover it.
00:35:56.793 [2024-07-26 06:28:08.055176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.793 [2024-07-26 06:28:08.055210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.055338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.055372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.055541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.055574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.055731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.055764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.055927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.055960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.056127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.056161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.056294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.056327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.056469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.056502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.056685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.056734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.056911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.056947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.057083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.057126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.057288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.057321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.057487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.057520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.057672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.057705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.057853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.057886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.058028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.058067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.058224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.058257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.058423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.058456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.058586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.058620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.058781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.058814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.058984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.059020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.059164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.059198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.059345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.059379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.059508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.059541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.059676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.059709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.059865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.059897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.060037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.060083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.060249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.060297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.060458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.060493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.060619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.060652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.060841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.060874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.061048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.061095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.061272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.061306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.794 qpair failed and we were unable to recover it.
00:35:56.794 [2024-07-26 06:28:08.061467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.794 [2024-07-26 06:28:08.061499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.061639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.061672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.061811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.061860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.062010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.062043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.062210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.062243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.062374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.062407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.062547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.062580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.062713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.062747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.062906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.062938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.063075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.063109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.063294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.063327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.063473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.063506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.063685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.063718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.063848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.063882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.064036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.064079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.064225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.064259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.064389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.064422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.064579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.064612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.064744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.064778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.064929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.064962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.065155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.065203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:56.795 [2024-07-26 06:28:08.065344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.795 [2024-07-26 06:28:08.065386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:56.795 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.065524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.065561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.065691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.065724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.065859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.065893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.066032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.066071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.066244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.066277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.066429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.066462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.066601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.066640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.066780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.066813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.066969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.067001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.067151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.067185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.067367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.067406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.067564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.067607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.067735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.067767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.067923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.067957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.068109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.068143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.068275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.068308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.068475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.068508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.068644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.068678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.068851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.060 [2024-07-26 06:28:08.068883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.060 qpair failed and we were unable to recover it.
00:35:57.060 [2024-07-26 06:28:08.069027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.069065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.069239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.069276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.069417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.069450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.069589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.069621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.069764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.069798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.069960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.069992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.070131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.070164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.070314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.070345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.070484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.070517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.070646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.070678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.070806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.070838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.071002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.071034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.071180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.071214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.071370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.071401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.071529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.071561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.071727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.071761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.071891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.071923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.072081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.061 [2024-07-26 06:28:08.072114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.061 qpair failed and we were unable to recover it.
00:35:57.061 [2024-07-26 06:28:08.072271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.072304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.072439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.072471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.072650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.072682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.072810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.072843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.073004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.073037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 
00:35:57.061 [2024-07-26 06:28:08.073188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.073221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.073351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.073383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.073542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.073575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.073733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.073767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.073907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.073946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 
00:35:57.061 [2024-07-26 06:28:08.074153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.074188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.074320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.074355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.074511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.074544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.074734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.074767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.074909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.074944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 
00:35:57.061 [2024-07-26 06:28:08.075129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.075163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.075323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.075356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.075495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.075531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.075663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.075695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 00:35:57.061 [2024-07-26 06:28:08.075856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.061 [2024-07-26 06:28:08.075889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.061 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.076046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.076095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.076260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.076293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.076424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.076456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.076604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.076641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.076797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.076830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.076975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.077008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.077204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.077237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.077372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.077403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.077538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.077572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.077733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.077770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.077930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.077963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.078086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.078120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.078259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.078292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.078423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.078455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.078616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.078650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.078807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.078840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.078991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.079025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.079170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.079205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.079333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.079367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.079503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.079536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.079677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.079709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.079867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.079900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.080051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.080096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.080224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.080256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.080411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.080444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.080606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.080639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.080781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.080814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.080952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.080985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.081119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.081152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.081301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.081333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.081477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.081509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.081658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.081691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.081848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.081881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.082047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.082089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.082224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.082269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 
00:35:57.062 [2024-07-26 06:28:08.082404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.082437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.082598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.062 [2024-07-26 06:28:08.082630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.062 qpair failed and we were unable to recover it. 00:35:57.062 [2024-07-26 06:28:08.082760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.082792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.082920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.082953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.083097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.083156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-07-26 06:28:08.083321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.083353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.083522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.083555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.083712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.083745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.083876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.083913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.084102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.084134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-07-26 06:28:08.084269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.084302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.084429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.084462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.084589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.084621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.084759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.084792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.084930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.084962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-07-26 06:28:08.085117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.085149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.085336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.085368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.085502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.085535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.085727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.085760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.085915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.085947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-07-26 06:28:08.086097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.086129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.086257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.086289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.086450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.086482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.086622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.086655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.086793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.086826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-07-26 06:28:08.086982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.087015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.087165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.087198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.087340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.087372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.087530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.087563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 00:35:57.063 [2024-07-26 06:28:08.087689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.063 [2024-07-26 06:28:08.087721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.063 qpair failed and we were unable to recover it. 
00:35:57.063 [2024-07-26 06:28:08.087855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.087888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.088029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.088074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.088204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.088236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.088370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.088403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.088565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.088598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.088761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.088793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.088920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.088952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.089092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.089125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.089268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.089300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.063 [2024-07-26 06:28:08.089463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.063 [2024-07-26 06:28:08.089495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.063 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.089634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.089667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.089796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.089828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.089959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.089991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.090125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.090159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.090286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.090319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.090466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.090498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.090655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.090688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.090849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.090881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.091040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.091082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.091225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.091259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.091391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.091424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.091557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.091591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.091754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.091786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.091916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.091949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.092138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.092172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.092329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.092361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.092519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.092550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.092709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.092743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.092896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.092935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.093112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.093147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.093277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.093311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.093473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.093506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.093645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.093679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.093822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.093857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.094032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.094090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.094255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.094290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.094445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.094480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.094641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.094674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.094810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.094843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.095002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.095035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.095218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.095252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.095381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.095424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.095588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.095620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.095772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.095805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.095935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.095967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.096108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.096141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.064 [2024-07-26 06:28:08.096274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.064 [2024-07-26 06:28:08.096307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.064 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.096463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.096495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.096651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.096683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.096844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.096877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.097012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.097045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.097194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.097227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.097374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.097406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.097562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.097595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.097755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.097787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.097918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.097951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.098107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.098140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.098299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.098332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.098474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.098512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.098650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.098683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.098843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.098876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.099043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.099081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.099213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.099244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.099375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.099408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.099542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.099574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.099738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.099771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.099931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.099964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.100110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.100144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.100291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.100322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.100461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.100493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.100653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.100685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.100824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.100857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.101002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.101034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.101172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.101205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.101334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.101366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.101487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.101518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.101648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.101682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.065 qpair failed and we were unable to recover it.
00:35:57.065 [2024-07-26 06:28:08.101803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.065 [2024-07-26 06:28:08.101834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.101980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.102013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.102186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.102219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.102356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.102388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.102565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.102596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.102763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.102801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.102928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.102960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.103098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.103130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.103300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.103333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.103470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.103503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.103657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.103690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.103823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.103856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.103996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.104029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.104185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.104223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.104364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.104399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.104561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.104594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.104757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.104791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.104927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.104960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.105092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.105127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.105289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.105322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.105488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.105521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.105666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.105704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.105852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.105899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.106092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.106128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.106273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.106306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.106481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.106513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.106646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.106694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.106848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.106881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.107010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.107041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.107190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.107222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.107358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.107393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.107551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.107584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.107733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.107767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.107901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.107934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.108118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.108151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.108317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.108351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.108506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.108540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.108679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.066 [2024-07-26 06:28:08.108724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.066 qpair failed and we were unable to recover it.
00:35:57.066 [2024-07-26 06:28:08.108883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.108916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.109072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.109105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.109277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.109310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.109460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.109493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.109662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.109695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 
00:35:57.067 [2024-07-26 06:28:08.109856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.109903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.110052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.110093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.110245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.110281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.110695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.110734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 Malloc0 00:35:57.067 [2024-07-26 06:28:08.110901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.110936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 
00:35:57.067 [2024-07-26 06:28:08.111091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.111125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.111258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.111290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.111417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.067 [2024-07-26 06:28:08.111450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.111613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.111645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 
00:35:57.067 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:57.067 [2024-07-26 06:28:08.111806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.111840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.111974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.112006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.067 [2024-07-26 06:28:08.112143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.112177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.112311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.112344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 
00:35:57.067 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:57.067 [2024-07-26 06:28:08.112512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.112545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.112707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.112740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.112874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.112908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.113069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.113103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.113238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.113284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 
00:35:57.067 [2024-07-26 06:28:08.113422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.113456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.113589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.113623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.113800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.113832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.113959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.113992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.114152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.114186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 
00:35:57.067 [2024-07-26 06:28:08.114345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.114378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.114512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.114545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.114695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.114728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.114859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.114892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.115073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.115106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 
00:35:57.067 [2024-07-26 06:28:08.115239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.115272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.115400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.115434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.067 [2024-07-26 06:28:08.115567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.067 [2024-07-26 06:28:08.115601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.067 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.115731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.115765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.115897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.115931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.116070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.116104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.116260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.116294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.116426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.116458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.116587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.116619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.116797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.116831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.116964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.116997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.117137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.117170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.117307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.117341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.117496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.117529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.117719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.117752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.117912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.117950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.118124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.118158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.118314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.118347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.118521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.118506] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.068 [2024-07-26 06:28:08.118554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.118689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.118721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.118857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.118890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.119063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.119096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.119256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.119288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.119430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.119464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.119669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.119717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.119905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.119947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.120101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.120137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.120303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.120337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.120500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.120538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.120696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.120729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.120875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.120908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.121063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.121112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.121258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.121294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.121430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.121462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.121589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.121621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.121774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.121807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.121948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.121981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.122130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.122164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.122328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.122361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 00:35:57.068 [2024-07-26 06:28:08.122526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.068 [2024-07-26 06:28:08.122558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:35:57.068 qpair failed and we were unable to recover it. 
00:35:57.068 [2024-07-26 06:28:08.122693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.122727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.122874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.122906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.123042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.123079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.123246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.123279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.123435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.123468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.123602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.123634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.123770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.123802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.123961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.123994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.124130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.124162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.124320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.124352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.124496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.124530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.124692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.124725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.124882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.124914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.125079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.125111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.125271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.125304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.125444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.125481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.125618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.125652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.125813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.125847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.125989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.126022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.126177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.126210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.126367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.126415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.126554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.126589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.126717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.126749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.126920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.126953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.127102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.127136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.127296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.127328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.127463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.127495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.127628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.127661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.127819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.127857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.128000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.128032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.128171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.128204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.069 [2024-07-26 06:28:08.128344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.069 [2024-07-26 06:28:08.128382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.069 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.128529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.128561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:57.070 [2024-07-26 06:28:08.128704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.128735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:57.070 [2024-07-26 06:28:08.128873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.128906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:57.070 [2024-07-26 06:28:08.129046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.129086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.129226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.129259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.129403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.129451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.129597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.129633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.129778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.129812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.129978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.130019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.130157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.130190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.130346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.130380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.130515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.130548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.130719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.130755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.130891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.130925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.131092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.131127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.131269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.131303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.131435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.131468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.131603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.131637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.131800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.131833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.131967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.132000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.132192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.132227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.132366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.132401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.132539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.132572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.132716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.132749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.132873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.132907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.133055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.133094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.133242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.133276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.133438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.133471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.133613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.133646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.133792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.133824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.133980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.134012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.134151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.134184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.134322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.134353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.134484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.134516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.134643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.134681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.070 qpair failed and we were unable to recover it.
00:35:57.070 [2024-07-26 06:28:08.134858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.070 [2024-07-26 06:28:08.134902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.135055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.135099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.135254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.135302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.135439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.135474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.135609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.135643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.135843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.135882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.136064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.136101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.136246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.136281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:57.071 [2024-07-26 06:28:08.136441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.136474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:57.071 [2024-07-26 06:28:08.136622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.136655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:57.071 [2024-07-26 06:28:08.136784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.136817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:57.071 [2024-07-26 06:28:08.136976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.137010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.137165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.137202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.137340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.137373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.137508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.137540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.137693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.137725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.137850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.137883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.138041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.138080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.138213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.138244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 
00:35:57.071 [2024-07-26 06:28:08.138382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.138426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.138557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.138588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.138654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:35:57.071 [2024-07-26 06:28:08.138889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.138938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.139085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.139122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.139256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.139289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 
00:35:57.071 [2024-07-26 06:28:08.139429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.139461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.139618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.139650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.139799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.139833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.139964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.139998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.140160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.140193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 
00:35:57.071 [2024-07-26 06:28:08.140322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.140354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.140538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.140570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.140740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.140772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.140896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.140927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 00:35:57.071 [2024-07-26 06:28:08.141073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:57.071 [2024-07-26 06:28:08.141121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:35:57.071 qpair failed and we were unable to recover it. 
00:35:57.071 [2024-07-26 06:28:08.141267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.141302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.071 qpair failed and we were unable to recover it.
00:35:57.071 [2024-07-26 06:28:08.141462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.071 [2024-07-26 06:28:08.141494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.141633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.141666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.141812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.141850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.142015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.142069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.142213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.142246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.142407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.142455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.142653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.142688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.142840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.142874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.143016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.143050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.143214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.143248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.143390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.143423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.143587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.143618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.143751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.143782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.143955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.144003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.144168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.144217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.144378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:57.072 [2024-07-26 06:28:08.144426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:57.072 [2024-07-26 06:28:08.144572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.144610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:57.072 [2024-07-26 06:28:08.144747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.144779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:57.072 [2024-07-26 06:28:08.144920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.144951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.145087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.145121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.145259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.145290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.145460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.145492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.145653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.145686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.145826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.145859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.145986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.146018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.146186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.146219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.146346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.146378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.146545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.146577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.146721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.146753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.146898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.146930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.147088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.147137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.147320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.147357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.147488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.147533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.147666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.147698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.147830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.147864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.148004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.148037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.072 [2024-07-26 06:28:08.148179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.072 [2024-07-26 06:28:08.148211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.072 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.148366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:57.073 [2024-07-26 06:28:08.148400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.148613] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:57.073 [2024-07-26 06:28:08.150324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.150520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.150561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.150592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.150618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.150691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:57.073 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:57.073 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:57.073 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:57.073 [2024-07-26 06:28:08.159945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.160131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.160167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.160191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.160211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.160253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:57.073 06:28:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 313008
00:35:57.073 [2024-07-26 06:28:08.169976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.170139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.170174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.170197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.170216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.170258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.180045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.180218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.180252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.180276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.180295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.180336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.190021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.190212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.190251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.190275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.190294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.190350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.200008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.200170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.200204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.200228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.200247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.200287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.210009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.210163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.210197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.210219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.210238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.210279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.220083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.220256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.220290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.220314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.220332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.220373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.230129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.230278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.230314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.230337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.230362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.230426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.240124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.240289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.240330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.240354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.240374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.240414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.250128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.250283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.250316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.250340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.250358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.250399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.073 [2024-07-26 06:28:08.260162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.073 [2024-07-26 06:28:08.260323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.073 [2024-07-26 06:28:08.260356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.073 [2024-07-26 06:28:08.260379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.073 [2024-07-26 06:28:08.260398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.073 [2024-07-26 06:28:08.260439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.073 qpair failed and we were unable to recover it.
00:35:57.074 [2024-07-26 06:28:08.270199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.074 [2024-07-26 06:28:08.270400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.074 [2024-07-26 06:28:08.270434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.074 [2024-07-26 06:28:08.270457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.074 [2024-07-26 06:28:08.270475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.074 [2024-07-26 06:28:08.270517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.074 qpair failed and we were unable to recover it.
00:35:57.074 [2024-07-26 06:28:08.280251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.074 [2024-07-26 06:28:08.280402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.074 [2024-07-26 06:28:08.280435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.074 [2024-07-26 06:28:08.280458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.074 [2024-07-26 06:28:08.280477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.074 [2024-07-26 06:28:08.280531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.074 qpair failed and we were unable to recover it.
00:35:57.074 [2024-07-26 06:28:08.290301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.074 [2024-07-26 06:28:08.290492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.074 [2024-07-26 06:28:08.290526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.074 [2024-07-26 06:28:08.290549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.074 [2024-07-26 06:28:08.290568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.074 [2024-07-26 06:28:08.290608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.074 qpair failed and we were unable to recover it.
00:35:57.074 [2024-07-26 06:28:08.300315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.074 [2024-07-26 06:28:08.300501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.074 [2024-07-26 06:28:08.300535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.074 [2024-07-26 06:28:08.300557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.074 [2024-07-26 06:28:08.300576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.074 [2024-07-26 06:28:08.300617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.074 qpair failed and we were unable to recover it.
00:35:57.074 [2024-07-26 06:28:08.310285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.074 [2024-07-26 06:28:08.310435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.074 [2024-07-26 06:28:08.310468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.074 [2024-07-26 06:28:08.310491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.074 [2024-07-26 06:28:08.310511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.074 [2024-07-26 06:28:08.310551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.074 qpair failed and we were unable to recover it.
00:35:57.074 [2024-07-26 06:28:08.320390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.074 [2024-07-26 06:28:08.320546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.074 [2024-07-26 06:28:08.320581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.074 [2024-07-26 06:28:08.320609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.074 [2024-07-26 06:28:08.320629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.074 [2024-07-26 06:28:08.320670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.074 qpair failed and we were unable to recover it. 
00:35:57.074 [2024-07-26 06:28:08.330399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.074 [2024-07-26 06:28:08.330552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.074 [2024-07-26 06:28:08.330586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.074 [2024-07-26 06:28:08.330609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.074 [2024-07-26 06:28:08.330627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.074 [2024-07-26 06:28:08.330667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.074 qpair failed and we were unable to recover it. 
00:35:57.074 [2024-07-26 06:28:08.340424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.074 [2024-07-26 06:28:08.340634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.074 [2024-07-26 06:28:08.340667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.074 [2024-07-26 06:28:08.340690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.074 [2024-07-26 06:28:08.340710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.074 [2024-07-26 06:28:08.340750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.074 qpair failed and we were unable to recover it. 
00:35:57.074 [2024-07-26 06:28:08.350496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.074 [2024-07-26 06:28:08.350652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.074 [2024-07-26 06:28:08.350686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.074 [2024-07-26 06:28:08.350710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.074 [2024-07-26 06:28:08.350728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.074 [2024-07-26 06:28:08.350769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.074 qpair failed and we were unable to recover it. 
00:35:57.074 [2024-07-26 06:28:08.360508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.074 [2024-07-26 06:28:08.360659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.074 [2024-07-26 06:28:08.360692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.074 [2024-07-26 06:28:08.360715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.074 [2024-07-26 06:28:08.360734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.074 [2024-07-26 06:28:08.360775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.074 qpair failed and we were unable to recover it. 
00:35:57.074 [2024-07-26 06:28:08.370455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.074 [2024-07-26 06:28:08.370614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.074 [2024-07-26 06:28:08.370652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.074 [2024-07-26 06:28:08.370677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.074 [2024-07-26 06:28:08.370696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.074 [2024-07-26 06:28:08.370738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.074 qpair failed and we were unable to recover it. 
00:35:57.074 [2024-07-26 06:28:08.380679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.074 [2024-07-26 06:28:08.380841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.074 [2024-07-26 06:28:08.380876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.075 [2024-07-26 06:28:08.380900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.075 [2024-07-26 06:28:08.380920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.075 [2024-07-26 06:28:08.380962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.075 qpair failed and we were unable to recover it. 
00:35:57.334 [2024-07-26 06:28:08.390566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.334 [2024-07-26 06:28:08.390767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.334 [2024-07-26 06:28:08.390801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.334 [2024-07-26 06:28:08.390824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.334 [2024-07-26 06:28:08.390843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.334 [2024-07-26 06:28:08.390898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.334 qpair failed and we were unable to recover it. 
00:35:57.334 [2024-07-26 06:28:08.400603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.334 [2024-07-26 06:28:08.400769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.334 [2024-07-26 06:28:08.400803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.334 [2024-07-26 06:28:08.400827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.334 [2024-07-26 06:28:08.400847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.334 [2024-07-26 06:28:08.400887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.334 qpair failed and we were unable to recover it. 
00:35:57.334 [2024-07-26 06:28:08.410674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.334 [2024-07-26 06:28:08.410822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.334 [2024-07-26 06:28:08.410856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.334 [2024-07-26 06:28:08.410885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.334 [2024-07-26 06:28:08.410905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.334 [2024-07-26 06:28:08.410945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.334 qpair failed and we were unable to recover it. 
00:35:57.334 [2024-07-26 06:28:08.420621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.334 [2024-07-26 06:28:08.420778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.334 [2024-07-26 06:28:08.420813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.334 [2024-07-26 06:28:08.420836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.334 [2024-07-26 06:28:08.420855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.334 [2024-07-26 06:28:08.420896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.334 qpair failed and we were unable to recover it. 
00:35:57.334 [2024-07-26 06:28:08.430707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.334 [2024-07-26 06:28:08.430857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.334 [2024-07-26 06:28:08.430900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.334 [2024-07-26 06:28:08.430925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.334 [2024-07-26 06:28:08.430944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.334 [2024-07-26 06:28:08.430984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.334 qpair failed and we were unable to recover it. 
00:35:57.334 [2024-07-26 06:28:08.440703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.334 [2024-07-26 06:28:08.440865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.334 [2024-07-26 06:28:08.440899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.334 [2024-07-26 06:28:08.440923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.334 [2024-07-26 06:28:08.440942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.334 [2024-07-26 06:28:08.440982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.450701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.450849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.450882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.450906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.450925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.450965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.460783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.460931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.460964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.460988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.461007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.461070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.470792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.471000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.471035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.471073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.471097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.471138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.480824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.480968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.481002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.481026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.481045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.481095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.490916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.491121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.491159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.491184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.491203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.491244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.500894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.501039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.501086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.501112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.501131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.501172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.510950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.511099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.511133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.511156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.511174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.511215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.520954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.521151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.521185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.521209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.521228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.521282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.530936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.531130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.531164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.531187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.531205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.531246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.541020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.541184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.541218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.541255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.541274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.541321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.551079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.335 [2024-07-26 06:28:08.551242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.335 [2024-07-26 06:28:08.551275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.335 [2024-07-26 06:28:08.551297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.335 [2024-07-26 06:28:08.551316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.335 [2024-07-26 06:28:08.551357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.335 qpair failed and we were unable to recover it. 
00:35:57.335 [2024-07-26 06:28:08.561172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.336 [2024-07-26 06:28:08.561329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.336 [2024-07-26 06:28:08.561367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.336 [2024-07-26 06:28:08.561390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.336 [2024-07-26 06:28:08.561409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.336 [2024-07-26 06:28:08.561459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.336 qpair failed and we were unable to recover it. 
00:35:57.336 [2024-07-26 06:28:08.571143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.336 [2024-07-26 06:28:08.571323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.336 [2024-07-26 06:28:08.571367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.336 [2024-07-26 06:28:08.571391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.336 [2024-07-26 06:28:08.571410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.336 [2024-07-26 06:28:08.571460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.336 qpair failed and we were unable to recover it. 
00:35:57.336 [2024-07-26 06:28:08.581120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.336 [2024-07-26 06:28:08.581280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.336 [2024-07-26 06:28:08.581313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.336 [2024-07-26 06:28:08.581336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.336 [2024-07-26 06:28:08.581365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.336 [2024-07-26 06:28:08.581406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.336 qpair failed and we were unable to recover it. 
00:35:57.336 [2024-07-26 06:28:08.591226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.336 [2024-07-26 06:28:08.591398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.336 [2024-07-26 06:28:08.591440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.336 [2024-07-26 06:28:08.591465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.336 [2024-07-26 06:28:08.591484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.336 [2024-07-26 06:28:08.591525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.336 qpair failed and we were unable to recover it. 
00:35:57.336 [2024-07-26 06:28:08.601205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.336 [2024-07-26 06:28:08.601361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.336 [2024-07-26 06:28:08.601394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.336 [2024-07-26 06:28:08.601417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.336 [2024-07-26 06:28:08.601436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.336 [2024-07-26 06:28:08.601477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.336 qpair failed and we were unable to recover it. 
00:35:57.336 [2024-07-26 06:28:08.611213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.336 [2024-07-26 06:28:08.611362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.336 [2024-07-26 06:28:08.611396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.336 [2024-07-26 06:28:08.611427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.336 [2024-07-26 06:28:08.611445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.336 [2024-07-26 06:28:08.611496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.336 qpair failed and we were unable to recover it. 
00:35:57.336 [2024-07-26 06:28:08.621250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.336 [2024-07-26 06:28:08.621419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.336 [2024-07-26 06:28:08.621452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.336 [2024-07-26 06:28:08.621475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.336 [2024-07-26 06:28:08.621494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.336 [2024-07-26 06:28:08.621540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.336 qpair failed and we were unable to recover it.
00:35:57.336 [2024-07-26 06:28:08.631236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.336 [2024-07-26 06:28:08.631397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.336 [2024-07-26 06:28:08.631430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.336 [2024-07-26 06:28:08.631453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.336 [2024-07-26 06:28:08.631477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.336 [2024-07-26 06:28:08.631518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.336 qpair failed and we were unable to recover it.
00:35:57.336 [2024-07-26 06:28:08.641318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.336 [2024-07-26 06:28:08.641468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.336 [2024-07-26 06:28:08.641502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.336 [2024-07-26 06:28:08.641524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.336 [2024-07-26 06:28:08.641543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.336 [2024-07-26 06:28:08.641583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.336 qpair failed and we were unable to recover it.
00:35:57.336 [2024-07-26 06:28:08.651347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.336 [2024-07-26 06:28:08.651512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.336 [2024-07-26 06:28:08.651545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.336 [2024-07-26 06:28:08.651568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.336 [2024-07-26 06:28:08.651587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.336 [2024-07-26 06:28:08.651627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.336 qpair failed and we were unable to recover it.
00:35:57.336 [2024-07-26 06:28:08.661357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.336 [2024-07-26 06:28:08.661507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.336 [2024-07-26 06:28:08.661539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.336 [2024-07-26 06:28:08.661562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.336 [2024-07-26 06:28:08.661581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.336 [2024-07-26 06:28:08.661621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.336 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.671449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.671618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.671656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.671681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.671700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.671741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.681434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.681587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.681621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.681644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.681663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.681703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.691389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.691533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.691565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.691588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.691608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.691648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.701527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.701728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.701761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.701784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.701803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.701843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.711476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.711630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.711664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.711687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.711706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.711746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.721579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.721740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.721773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.721801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.721821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.721862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.731569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.731719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.731752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.731775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.731793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.731834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.741630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.741791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.741825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.741848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.741867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.741920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.596 [2024-07-26 06:28:08.751642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.596 [2024-07-26 06:28:08.751834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.596 [2024-07-26 06:28:08.751867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.596 [2024-07-26 06:28:08.751890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.596 [2024-07-26 06:28:08.751909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.596 [2024-07-26 06:28:08.751949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.596 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.761669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.761815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.761849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.761873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.761892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.761932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.771684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.771832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.771866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.771889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.771907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.771947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.781723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.781877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.781910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.781933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.781952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.781993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.791702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.791895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.791928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.791951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.791970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.792010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.801805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.801946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.801979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.802002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.802021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.802085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.811788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.811943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.811976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.812005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.812024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.812071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.821841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.821998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.822036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.822069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.822091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.822132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.831881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.832031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.832071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.832096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.832115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.832156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.841908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.842078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.842111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.842135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.842154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.842195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.851882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.852088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.852122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.852145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.852164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.852205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.861991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.862168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.862202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.862225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.862244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.862284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.871992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.872161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.872194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.872217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.597 [2024-07-26 06:28:08.872236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.597 [2024-07-26 06:28:08.872276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.597 qpair failed and we were unable to recover it.
00:35:57.597 [2024-07-26 06:28:08.882048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.597 [2024-07-26 06:28:08.882240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.597 [2024-07-26 06:28:08.882273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.597 [2024-07-26 06:28:08.882296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.598 [2024-07-26 06:28:08.882315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.598 [2024-07-26 06:28:08.882355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.598 qpair failed and we were unable to recover it.
00:35:57.598 [2024-07-26 06:28:08.892029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.598 [2024-07-26 06:28:08.892187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.598 [2024-07-26 06:28:08.892221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.598 [2024-07-26 06:28:08.892244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.598 [2024-07-26 06:28:08.892262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.598 [2024-07-26 06:28:08.892303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.598 qpair failed and we were unable to recover it.
00:35:57.598 [2024-07-26 06:28:08.902030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.598 [2024-07-26 06:28:08.902195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.598 [2024-07-26 06:28:08.902234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.598 [2024-07-26 06:28:08.902258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.598 [2024-07-26 06:28:08.902277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.598 [2024-07-26 06:28:08.902317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.598 qpair failed and we were unable to recover it.
00:35:57.598 [2024-07-26 06:28:08.912283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.598 [2024-07-26 06:28:08.912427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.598 [2024-07-26 06:28:08.912460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.598 [2024-07-26 06:28:08.912483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.598 [2024-07-26 06:28:08.912501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.598 [2024-07-26 06:28:08.912542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.598 qpair failed and we were unable to recover it.
00:35:57.598 [2024-07-26 06:28:08.922121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.598 [2024-07-26 06:28:08.922270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.598 [2024-07-26 06:28:08.922303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.598 [2024-07-26 06:28:08.922326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.598 [2024-07-26 06:28:08.922345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.598 [2024-07-26 06:28:08.922386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.598 qpair failed and we were unable to recover it.
00:35:57.857 [2024-07-26 06:28:08.932177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.857 [2024-07-26 06:28:08.932354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.857 [2024-07-26 06:28:08.932392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.857 [2024-07-26 06:28:08.932416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.857 [2024-07-26 06:28:08.932436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.857 [2024-07-26 06:28:08.932476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.857 qpair failed and we were unable to recover it.
00:35:57.857 [2024-07-26 06:28:08.942220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.857 [2024-07-26 06:28:08.942413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.857 [2024-07-26 06:28:08.942446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.857 [2024-07-26 06:28:08.942470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.857 [2024-07-26 06:28:08.942489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.857 [2024-07-26 06:28:08.942535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.857 qpair failed and we were unable to recover it.
00:35:57.857 [2024-07-26 06:28:08.952182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.857 [2024-07-26 06:28:08.952359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.857 [2024-07-26 06:28:08.952394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.857 [2024-07-26 06:28:08.952422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.857 [2024-07-26 06:28:08.952443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.857 [2024-07-26 06:28:08.952484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.857 qpair failed and we were unable to recover it.
00:35:57.857 [2024-07-26 06:28:08.962231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.857 [2024-07-26 06:28:08.962389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.857 [2024-07-26 06:28:08.962422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.857 [2024-07-26 06:28:08.962445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.857 [2024-07-26 06:28:08.962463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.857 [2024-07-26 06:28:08.962503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.857 qpair failed and we were unable to recover it.
00:35:57.857 [2024-07-26 06:28:08.972411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:57.857 [2024-07-26 06:28:08.972563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:57.857 [2024-07-26 06:28:08.972597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:57.857 [2024-07-26 06:28:08.972620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:57.857 [2024-07-26 06:28:08.972638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:57.857 [2024-07-26 06:28:08.972678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:57.857 qpair failed and we were unable to recover it.
00:35:57.857 [2024-07-26 06:28:08.982447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.857 [2024-07-26 06:28:08.982609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.857 [2024-07-26 06:28:08.982642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.857 [2024-07-26 06:28:08.982665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.857 [2024-07-26 06:28:08.982684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.857 [2024-07-26 06:28:08.982724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.857 qpair failed and we were unable to recover it. 
00:35:57.857 [2024-07-26 06:28:08.992283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.857 [2024-07-26 06:28:08.992455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.857 [2024-07-26 06:28:08.992493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.857 [2024-07-26 06:28:08.992516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.857 [2024-07-26 06:28:08.992535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.857 [2024-07-26 06:28:08.992575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.857 qpair failed and we were unable to recover it. 
00:35:57.857 [2024-07-26 06:28:09.002374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.857 [2024-07-26 06:28:09.002521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.857 [2024-07-26 06:28:09.002555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.857 [2024-07-26 06:28:09.002578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.857 [2024-07-26 06:28:09.002597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.857 [2024-07-26 06:28:09.002651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.857 qpair failed and we were unable to recover it. 
00:35:57.857 [2024-07-26 06:28:09.012319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.857 [2024-07-26 06:28:09.012469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.857 [2024-07-26 06:28:09.012502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.857 [2024-07-26 06:28:09.012525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.857 [2024-07-26 06:28:09.012545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.857 [2024-07-26 06:28:09.012590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.857 qpair failed and we were unable to recover it. 
00:35:57.857 [2024-07-26 06:28:09.022398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.857 [2024-07-26 06:28:09.022552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.857 [2024-07-26 06:28:09.022585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.857 [2024-07-26 06:28:09.022608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.857 [2024-07-26 06:28:09.022627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.857 [2024-07-26 06:28:09.022668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.857 qpair failed and we were unable to recover it. 
00:35:57.857 [2024-07-26 06:28:09.032374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.857 [2024-07-26 06:28:09.032549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.857 [2024-07-26 06:28:09.032582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.857 [2024-07-26 06:28:09.032604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.857 [2024-07-26 06:28:09.032628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.857 [2024-07-26 06:28:09.032668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.857 qpair failed and we were unable to recover it. 
00:35:57.857 [2024-07-26 06:28:09.042453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.857 [2024-07-26 06:28:09.042608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.857 [2024-07-26 06:28:09.042641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.857 [2024-07-26 06:28:09.042665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.857 [2024-07-26 06:28:09.042684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.857 [2024-07-26 06:28:09.042724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.857 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.052465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.052620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.052655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.052677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.052710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.052752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.062519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.062674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.062707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.062729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.062748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.062787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.072590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.072766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.072801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.072824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.072843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.072896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.082565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.082712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.082745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.082768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.082787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.082827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.092561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.092704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.092737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.092760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.092779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.092819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.102622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.102791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.102824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.102847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.102866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.102906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.112668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.112834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.112871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.112895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.112914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.112955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.122683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.122833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.122865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.122887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.122910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.122950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.132703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.132854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.132887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.132911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.132929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.132969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.142704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.142857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.858 [2024-07-26 06:28:09.142889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.858 [2024-07-26 06:28:09.142912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.858 [2024-07-26 06:28:09.142931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.858 [2024-07-26 06:28:09.142971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.858 qpair failed and we were unable to recover it. 
00:35:57.858 [2024-07-26 06:28:09.152775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.858 [2024-07-26 06:28:09.152926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.859 [2024-07-26 06:28:09.152959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.859 [2024-07-26 06:28:09.152982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.859 [2024-07-26 06:28:09.153002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.859 [2024-07-26 06:28:09.153042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.859 qpair failed and we were unable to recover it. 
00:35:57.859 [2024-07-26 06:28:09.162788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.859 [2024-07-26 06:28:09.162948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.859 [2024-07-26 06:28:09.162980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.859 [2024-07-26 06:28:09.163003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.859 [2024-07-26 06:28:09.163022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.859 [2024-07-26 06:28:09.163074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.859 qpair failed and we were unable to recover it. 
00:35:57.859 [2024-07-26 06:28:09.172822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.859 [2024-07-26 06:28:09.172981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.859 [2024-07-26 06:28:09.173014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.859 [2024-07-26 06:28:09.173038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.859 [2024-07-26 06:28:09.173057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.859 [2024-07-26 06:28:09.173107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.859 qpair failed and we were unable to recover it. 
00:35:57.859 [2024-07-26 06:28:09.183104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.859 [2024-07-26 06:28:09.183267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.859 [2024-07-26 06:28:09.183301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.859 [2024-07-26 06:28:09.183324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.859 [2024-07-26 06:28:09.183344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:57.859 [2024-07-26 06:28:09.183399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.859 qpair failed and we were unable to recover it. 
00:35:58.118 [2024-07-26 06:28:09.192846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.118 [2024-07-26 06:28:09.192999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.118 [2024-07-26 06:28:09.193032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.118 [2024-07-26 06:28:09.193056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.118 [2024-07-26 06:28:09.193085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.118 [2024-07-26 06:28:09.193126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.118 qpair failed and we were unable to recover it. 
00:35:58.118 [2024-07-26 06:28:09.202900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.118 [2024-07-26 06:28:09.203041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.118 [2024-07-26 06:28:09.203082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.118 [2024-07-26 06:28:09.203106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.118 [2024-07-26 06:28:09.203124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.118 [2024-07-26 06:28:09.203165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.118 qpair failed and we were unable to recover it. 
00:35:58.118 [2024-07-26 06:28:09.212944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.118 [2024-07-26 06:28:09.213096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.118 [2024-07-26 06:28:09.213134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.213163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.213184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.213224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.222934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.223136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.223169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.223192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.223210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.223251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.233003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.233175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.233211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.233235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.233254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.233295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.243050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.243205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.243239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.243262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.243280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.243321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.253018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.253176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.253209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.253232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.253251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.253290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.263119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.263270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.263304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.263327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.263346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.263386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.273169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.273335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.273369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.273393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.273412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.273452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.283191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.283406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.283440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.283464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.283483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.283524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.293157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.293327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.293360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.293383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.293402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.293443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.303167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.303324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.303365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.303391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.303410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.303451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.313201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.313355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.313418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.313441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.313461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.313502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.323273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.323430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.323464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.323487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.323506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.323545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.333246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.333400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.333434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.333457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.119 [2024-07-26 06:28:09.333476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.119 [2024-07-26 06:28:09.333516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.119 qpair failed and we were unable to recover it. 
00:35:58.119 [2024-07-26 06:28:09.343309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.119 [2024-07-26 06:28:09.343468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.119 [2024-07-26 06:28:09.343504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.119 [2024-07-26 06:28:09.343529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.343548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.343595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.353304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.353452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.353486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.353509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.353528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.353567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.363355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.363516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.363550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.363573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.363592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.363633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.373362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.373551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.373585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.373608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.373627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.373667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.383415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.383577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.383623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.383648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.383668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.383710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.393536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.393707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.393745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.393769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.393788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.393829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.403476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.403615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.403648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.403671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.403690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.403755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.413493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.413636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.413669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.413692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.413712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.413751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.423577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.423726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.423760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.423792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.423811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.423851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.433530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.433672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.433704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.433728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.433747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.433793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.120 [2024-07-26 06:28:09.443644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.120 [2024-07-26 06:28:09.443809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.120 [2024-07-26 06:28:09.443844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.120 [2024-07-26 06:28:09.443872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.120 [2024-07-26 06:28:09.443892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.120 [2024-07-26 06:28:09.443933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.120 qpair failed and we were unable to recover it. 
00:35:58.380 [2024-07-26 06:28:09.453656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.380 [2024-07-26 06:28:09.453805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.380 [2024-07-26 06:28:09.453839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.380 [2024-07-26 06:28:09.453863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.380 [2024-07-26 06:28:09.453882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.380 [2024-07-26 06:28:09.453922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.380 qpair failed and we were unable to recover it. 
00:35:58.380 [2024-07-26 06:28:09.463643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.380 [2024-07-26 06:28:09.463801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.380 [2024-07-26 06:28:09.463835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.380 [2024-07-26 06:28:09.463858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.380 [2024-07-26 06:28:09.463878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.380 [2024-07-26 06:28:09.463918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.380 qpair failed and we were unable to recover it. 
00:35:58.380 [2024-07-26 06:28:09.473729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.380 [2024-07-26 06:28:09.473880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.380 [2024-07-26 06:28:09.473914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.380 [2024-07-26 06:28:09.473938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.380 [2024-07-26 06:28:09.473957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.380 [2024-07-26 06:28:09.473997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.380 qpair failed and we were unable to recover it. 
00:35:58.380 [2024-07-26 06:28:09.483763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.380 [2024-07-26 06:28:09.483929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.380 [2024-07-26 06:28:09.483962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.380 [2024-07-26 06:28:09.483985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.380 [2024-07-26 06:28:09.484004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.380 [2024-07-26 06:28:09.484045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.380 qpair failed and we were unable to recover it. 
00:35:58.380 [2024-07-26 06:28:09.493777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.380 [2024-07-26 06:28:09.493941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.380 [2024-07-26 06:28:09.493975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.380 [2024-07-26 06:28:09.493998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.380 [2024-07-26 06:28:09.494017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.380 [2024-07-26 06:28:09.494057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.380 qpair failed and we were unable to recover it. 
00:35:58.380 [2024-07-26 06:28:09.503859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.380 [2024-07-26 06:28:09.504021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.380 [2024-07-26 06:28:09.504055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.380 [2024-07-26 06:28:09.504104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.380 [2024-07-26 06:28:09.504124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.380 [2024-07-26 06:28:09.504164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.513884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.514040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.514082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.514116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.514135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.514175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.523874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.524027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.524069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.524104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.524129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.524171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.533912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.534076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.534110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.534133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.534152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.534205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.543919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.544087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.544121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.544145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.544164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.544205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.553969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.554141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.554175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.554198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.554216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.554258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.563991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.564144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.564178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.564202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.564220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.564276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.573940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.574131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.574164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.574187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.574206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.574261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.584089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.584253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.584286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.584309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.584328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.584368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.594065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.594263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.594296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.594319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.594338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.594378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.604100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.604274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.604313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.604336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.604355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.604396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.614137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.614286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.614320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.614349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.614369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.614409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.624111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.624304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.624337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.624360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.624379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.624419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.381 qpair failed and we were unable to recover it. 
00:35:58.381 [2024-07-26 06:28:09.634127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.381 [2024-07-26 06:28:09.634298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.381 [2024-07-26 06:28:09.634332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.381 [2024-07-26 06:28:09.634355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.381 [2024-07-26 06:28:09.634374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.381 [2024-07-26 06:28:09.634414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.382 [2024-07-26 06:28:09.644181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.382 [2024-07-26 06:28:09.644328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.382 [2024-07-26 06:28:09.644361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.382 [2024-07-26 06:28:09.644384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.382 [2024-07-26 06:28:09.644404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.382 [2024-07-26 06:28:09.644444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.382 [2024-07-26 06:28:09.654146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.382 [2024-07-26 06:28:09.654313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.382 [2024-07-26 06:28:09.654347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.382 [2024-07-26 06:28:09.654370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.382 [2024-07-26 06:28:09.654388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.382 [2024-07-26 06:28:09.654429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.382 [2024-07-26 06:28:09.664268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.382 [2024-07-26 06:28:09.664422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.382 [2024-07-26 06:28:09.664455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.382 [2024-07-26 06:28:09.664477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.382 [2024-07-26 06:28:09.664496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.382 [2024-07-26 06:28:09.664536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.382 [2024-07-26 06:28:09.674246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.382 [2024-07-26 06:28:09.674440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.382 [2024-07-26 06:28:09.674474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.382 [2024-07-26 06:28:09.674497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.382 [2024-07-26 06:28:09.674516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.382 [2024-07-26 06:28:09.674557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.382 [2024-07-26 06:28:09.684320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.382 [2024-07-26 06:28:09.684473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.382 [2024-07-26 06:28:09.684506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.382 [2024-07-26 06:28:09.684530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.382 [2024-07-26 06:28:09.684549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.382 [2024-07-26 06:28:09.684589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.382 [2024-07-26 06:28:09.694327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.382 [2024-07-26 06:28:09.694476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.382 [2024-07-26 06:28:09.694509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.382 [2024-07-26 06:28:09.694532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.382 [2024-07-26 06:28:09.694551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.382 [2024-07-26 06:28:09.694591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.382 [2024-07-26 06:28:09.704342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.382 [2024-07-26 06:28:09.704498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.382 [2024-07-26 06:28:09.704536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.382 [2024-07-26 06:28:09.704561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.382 [2024-07-26 06:28:09.704580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.382 [2024-07-26 06:28:09.704620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.382 qpair failed and we were unable to recover it. 
00:35:58.643 [2024-07-26 06:28:09.714404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.643 [2024-07-26 06:28:09.714571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.643 [2024-07-26 06:28:09.714605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.643 [2024-07-26 06:28:09.714628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.643 [2024-07-26 06:28:09.714647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.643 [2024-07-26 06:28:09.714687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.643 qpair failed and we were unable to recover it. 
00:35:58.643 [2024-07-26 06:28:09.724397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.643 [2024-07-26 06:28:09.724545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.643 [2024-07-26 06:28:09.724579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.643 [2024-07-26 06:28:09.724603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.643 [2024-07-26 06:28:09.724622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.643 [2024-07-26 06:28:09.724663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.643 qpair failed and we were unable to recover it. 
00:35:58.643 [2024-07-26 06:28:09.734475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.643 [2024-07-26 06:28:09.734616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.643 [2024-07-26 06:28:09.734649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.643 [2024-07-26 06:28:09.734672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.643 [2024-07-26 06:28:09.734691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.643 [2024-07-26 06:28:09.734731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.643 qpair failed and we were unable to recover it. 
00:35:58.643 [2024-07-26 06:28:09.744500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.643 [2024-07-26 06:28:09.744658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.643 [2024-07-26 06:28:09.744692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.643 [2024-07-26 06:28:09.744715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.643 [2024-07-26 06:28:09.744734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.643 [2024-07-26 06:28:09.744793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.643 qpair failed and we were unable to recover it. 
00:35:58.643 [2024-07-26 06:28:09.754463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.643 [2024-07-26 06:28:09.754615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.643 [2024-07-26 06:28:09.754649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.643 [2024-07-26 06:28:09.754672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.643 [2024-07-26 06:28:09.754691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.643 [2024-07-26 06:28:09.754731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.643 qpair failed and we were unable to recover it. 
00:35:58.643 [2024-07-26 06:28:09.764554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.643 [2024-07-26 06:28:09.764743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.643 [2024-07-26 06:28:09.764777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.643 [2024-07-26 06:28:09.764800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.643 [2024-07-26 06:28:09.764818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.643 [2024-07-26 06:28:09.764871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.643 qpair failed and we were unable to recover it. 
00:35:58.643 [2024-07-26 06:28:09.774549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.643 [2024-07-26 06:28:09.774700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.774733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.774756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.774775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.774815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.784561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.784762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.784795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.784818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.784836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.784877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.794670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.794850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.794888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.794912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.794931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.794991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.804694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.804839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.804872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.804895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.804914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.804967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.814676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.814820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.814853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.814876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.814895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.814935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.824759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.824915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.824948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.824986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.825005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.825046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.834765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.834909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.834942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.834965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.834985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.835030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.844770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.844915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.844948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.844971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.844991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.845030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.854834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.854982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.855015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.855037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.855056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.855108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.864844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.865003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.865036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.865066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.865088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.865129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.874924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.875083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.875117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.875140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.875159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.875199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.884885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.885042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.885081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.885105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.885124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.885166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.894905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.644 [2024-07-26 06:28:09.895083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.644 [2024-07-26 06:28:09.895119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.644 [2024-07-26 06:28:09.895143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.644 [2024-07-26 06:28:09.895162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:58.644 [2024-07-26 06:28:09.895203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.644 qpair failed and we were unable to recover it. 
00:35:58.644 [2024-07-26 06:28:09.904979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.644 [2024-07-26 06:28:09.905139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.644 [2024-07-26 06:28:09.905173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.644 [2024-07-26 06:28:09.905196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.905215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.905255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.645 [2024-07-26 06:28:09.914980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.645 [2024-07-26 06:28:09.915179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.645 [2024-07-26 06:28:09.915212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.645 [2024-07-26 06:28:09.915235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.915254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.915294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.645 [2024-07-26 06:28:09.925019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.645 [2024-07-26 06:28:09.925180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.645 [2024-07-26 06:28:09.925214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.645 [2024-07-26 06:28:09.925237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.925262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.925303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.645 [2024-07-26 06:28:09.935038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.645 [2024-07-26 06:28:09.935194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.645 [2024-07-26 06:28:09.935227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.645 [2024-07-26 06:28:09.935250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.935268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.935309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.645 [2024-07-26 06:28:09.945044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.645 [2024-07-26 06:28:09.945209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.645 [2024-07-26 06:28:09.945241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.645 [2024-07-26 06:28:09.945264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.945283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.945323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.645 [2024-07-26 06:28:09.955098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.645 [2024-07-26 06:28:09.955265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.645 [2024-07-26 06:28:09.955299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.645 [2024-07-26 06:28:09.955322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.955341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.955381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.645 [2024-07-26 06:28:09.965169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.645 [2024-07-26 06:28:09.965314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.645 [2024-07-26 06:28:09.965348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.645 [2024-07-26 06:28:09.965372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.965391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.965443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.645 [2024-07-26 06:28:09.975138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.645 [2024-07-26 06:28:09.975300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.645 [2024-07-26 06:28:09.975334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.645 [2024-07-26 06:28:09.975358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.645 [2024-07-26 06:28:09.975377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.645 [2024-07-26 06:28:09.975417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.645 qpair failed and we were unable to recover it.
00:35:58.906 [2024-07-26 06:28:09.985200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.906 [2024-07-26 06:28:09.985367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.906 [2024-07-26 06:28:09.985400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.906 [2024-07-26 06:28:09.985423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.906 [2024-07-26 06:28:09.985442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.906 [2024-07-26 06:28:09.985483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.906 qpair failed and we were unable to recover it.
00:35:58.906 [2024-07-26 06:28:09.995237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.906 [2024-07-26 06:28:09.995396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.906 [2024-07-26 06:28:09.995435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.906 [2024-07-26 06:28:09.995458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.906 [2024-07-26 06:28:09.995477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.906 [2024-07-26 06:28:09.995517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.906 qpair failed and we were unable to recover it.
00:35:58.906 [2024-07-26 06:28:10.005324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.906 [2024-07-26 06:28:10.005474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.906 [2024-07-26 06:28:10.005510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.906 [2024-07-26 06:28:10.005535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.906 [2024-07-26 06:28:10.005554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.906 [2024-07-26 06:28:10.005597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.906 qpair failed and we were unable to recover it.
00:35:58.906 [2024-07-26 06:28:10.015343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.906 [2024-07-26 06:28:10.015552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.015588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.015621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.015643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.015687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.025316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.025476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.025511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.025535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.025554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.025598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.035397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.035593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.035628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.035652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.035671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.035713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.045543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.045717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.045758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.045783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.045803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.045852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.055458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.055614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.055649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.055673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.055692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.055747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.065409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.065585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.065619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.065643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.065662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.065703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.075428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.075575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.075608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.075631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.075650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.075691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.085485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.085629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.085663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.085686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.085705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.085745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.095496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.095684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.095717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.095741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.095760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.095801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.105542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.105715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.105748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.105776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.105797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.105838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.115586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.115732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.115765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.115789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.115808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.115862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.125635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.125799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.125831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.125853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.125871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.125911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.135602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.135761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.135795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.135818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.907 [2024-07-26 06:28:10.135837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.907 [2024-07-26 06:28:10.135877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.907 qpair failed and we were unable to recover it.
00:35:58.907 [2024-07-26 06:28:10.145686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.907 [2024-07-26 06:28:10.145882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.907 [2024-07-26 06:28:10.145915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.907 [2024-07-26 06:28:10.145938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.145957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.145997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.155675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.155831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.155864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.155887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.155906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.155946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.165742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.165932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.165965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.165988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.166008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.166069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.175754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.175898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.175931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.175954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.175973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.176013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.185763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.185941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.185975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.186003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.186023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.186080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.195803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.195951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.195991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.196015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.196034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.196082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.205808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.205964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.205997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.206020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.206039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.206088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.215825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.215984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.216017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.216041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.216067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.216111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.225893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.226096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.226130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.226153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.226173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.226213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:58.908 [2024-07-26 06:28:10.235913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.908 [2024-07-26 06:28:10.236086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.908 [2024-07-26 06:28:10.236120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.908 [2024-07-26 06:28:10.236143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.908 [2024-07-26 06:28:10.236162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:58.908 [2024-07-26 06:28:10.236208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.908 qpair failed and we were unable to recover it.
00:35:59.168 [2024-07-26 06:28:10.245923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.168 [2024-07-26 06:28:10.246088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.168 [2024-07-26 06:28:10.246122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.168 [2024-07-26 06:28:10.246145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.168 [2024-07-26 06:28:10.246164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.168 [2024-07-26 06:28:10.246205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.168 qpair failed and we were unable to recover it.
00:35:59.168 [2024-07-26 06:28:10.255953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.168 [2024-07-26 06:28:10.256130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.168 [2024-07-26 06:28:10.256164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.168 [2024-07-26 06:28:10.256187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.168 [2024-07-26 06:28:10.256206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.168 [2024-07-26 06:28:10.256246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.168 qpair failed and we were unable to recover it.
00:35:59.168 [2024-07-26 06:28:10.265933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.168 [2024-07-26 06:28:10.266097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.168 [2024-07-26 06:28:10.266130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.168 [2024-07-26 06:28:10.266154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.168 [2024-07-26 06:28:10.266173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.168 [2024-07-26 06:28:10.266214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.168 qpair failed and we were unable to recover it. 
00:35:59.168 [2024-07-26 06:28:10.276009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.168 [2024-07-26 06:28:10.276175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.168 [2024-07-26 06:28:10.276209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.168 [2024-07-26 06:28:10.276232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.168 [2024-07-26 06:28:10.276251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.168 [2024-07-26 06:28:10.276291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.168 qpair failed and we were unable to recover it. 
00:35:59.168 [2024-07-26 06:28:10.286068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.168 [2024-07-26 06:28:10.286221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.168 [2024-07-26 06:28:10.286264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.168 [2024-07-26 06:28:10.286289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.168 [2024-07-26 06:28:10.286309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.168 [2024-07-26 06:28:10.286349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.168 qpair failed and we were unable to recover it. 
00:35:59.168 [2024-07-26 06:28:10.296000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.168 [2024-07-26 06:28:10.296154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.168 [2024-07-26 06:28:10.296189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.168 [2024-07-26 06:28:10.296214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.168 [2024-07-26 06:28:10.296233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.168 [2024-07-26 06:28:10.296273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.168 qpair failed and we were unable to recover it. 
00:35:59.168 [2024-07-26 06:28:10.306082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.168 [2024-07-26 06:28:10.306238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.168 [2024-07-26 06:28:10.306271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.168 [2024-07-26 06:28:10.306294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.168 [2024-07-26 06:28:10.306314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.168 [2024-07-26 06:28:10.306354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.168 qpair failed and we were unable to recover it. 
00:35:59.168 [2024-07-26 06:28:10.316093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.168 [2024-07-26 06:28:10.316249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.168 [2024-07-26 06:28:10.316282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.316305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.316324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.316365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.326151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.326297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.326330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.326352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.326377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.326418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.336154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.336296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.336329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.336352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.336385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.336426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.346235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.346431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.346464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.346488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.346507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.346547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.356402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.356552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.356586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.356609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.356628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.356668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.366262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.366420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.366453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.366475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.366494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.366535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.376286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.376479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.376513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.376536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.376555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.376610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.386339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.386495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.386534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.386558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.386577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.386617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.396332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.396492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.396525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.396548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.396567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.396608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.406352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.406497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.406530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.406553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.406572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.406612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.416375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.416569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.416602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.416632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.416652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.416692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.426405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.426567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.426605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.426630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.426650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.426690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.436467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.436620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.436654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.436677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.436696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.169 [2024-07-26 06:28:10.436736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.169 qpair failed and we were unable to recover it. 
00:35:59.169 [2024-07-26 06:28:10.446480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.169 [2024-07-26 06:28:10.446624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.169 [2024-07-26 06:28:10.446658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.169 [2024-07-26 06:28:10.446681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.169 [2024-07-26 06:28:10.446700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.170 [2024-07-26 06:28:10.446740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.170 qpair failed and we were unable to recover it. 
00:35:59.170 [2024-07-26 06:28:10.456466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.170 [2024-07-26 06:28:10.456630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.170 [2024-07-26 06:28:10.456662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.170 [2024-07-26 06:28:10.456685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.170 [2024-07-26 06:28:10.456704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.170 [2024-07-26 06:28:10.456744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.170 qpair failed and we were unable to recover it. 
00:35:59.170 [2024-07-26 06:28:10.466564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.170 [2024-07-26 06:28:10.466717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.170 [2024-07-26 06:28:10.466751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.170 [2024-07-26 06:28:10.466774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.170 [2024-07-26 06:28:10.466793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.170 [2024-07-26 06:28:10.466833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.170 qpair failed and we were unable to recover it. 
00:35:59.170 [2024-07-26 06:28:10.476538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.170 [2024-07-26 06:28:10.476684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.170 [2024-07-26 06:28:10.476718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.170 [2024-07-26 06:28:10.476741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.170 [2024-07-26 06:28:10.476760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.170 [2024-07-26 06:28:10.476800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.170 qpair failed and we were unable to recover it. 
00:35:59.170 [2024-07-26 06:28:10.486780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.170 [2024-07-26 06:28:10.486941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.170 [2024-07-26 06:28:10.486976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.170 [2024-07-26 06:28:10.486999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.170 [2024-07-26 06:28:10.487018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.170 [2024-07-26 06:28:10.487067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.170 qpair failed and we were unable to recover it. 
00:35:59.170 [2024-07-26 06:28:10.496613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.170 [2024-07-26 06:28:10.496751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.170 [2024-07-26 06:28:10.496784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.170 [2024-07-26 06:28:10.496807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.170 [2024-07-26 06:28:10.496826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.170 [2024-07-26 06:28:10.496867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.170 qpair failed and we were unable to recover it. 
00:35:59.429 [2024-07-26 06:28:10.506670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.429 [2024-07-26 06:28:10.506823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.429 [2024-07-26 06:28:10.506856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.429 [2024-07-26 06:28:10.506886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.429 [2024-07-26 06:28:10.506906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.429 [2024-07-26 06:28:10.506946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.429 qpair failed and we were unable to recover it. 
00:35:59.429 [2024-07-26 06:28:10.516758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.429 [2024-07-26 06:28:10.516913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.429 [2024-07-26 06:28:10.516948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.429 [2024-07-26 06:28:10.516971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.429 [2024-07-26 06:28:10.516990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.429 [2024-07-26 06:28:10.517030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.429 qpair failed and we were unable to recover it. 
00:35:59.429 [2024-07-26 06:28:10.526724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.429 [2024-07-26 06:28:10.526872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.430 [2024-07-26 06:28:10.526906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.430 [2024-07-26 06:28:10.526929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.430 [2024-07-26 06:28:10.526947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.430 [2024-07-26 06:28:10.526987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.430 qpair failed and we were unable to recover it. 
00:35:59.430 [2024-07-26 06:28:10.536779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.430 [2024-07-26 06:28:10.536951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.430 [2024-07-26 06:28:10.536984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.430 [2024-07-26 06:28:10.537008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.430 [2024-07-26 06:28:10.537027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.430 [2024-07-26 06:28:10.537087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.430 qpair failed and we were unable to recover it. 
00:35:59.430 [2024-07-26 06:28:10.546842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.546999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.547033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.547056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.547084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.547125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.556797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.556961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.556994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.557018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.557037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.557085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.566866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.567033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.567073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.567098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.567117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.567158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.576837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.577024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.577064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.577090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.577109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.577156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.586886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.587042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.587082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.587106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.587126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.587167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.596906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.597083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.597131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.597156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.597176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.597217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.607026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.607235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.607269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.607292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.607311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.607352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.616925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.617076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.617110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.617134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.617153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.617192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.627040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.627248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.627282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.627305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.627324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.627365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.637234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.637389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.637426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.637450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.637469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.637515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.647099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.430 [2024-07-26 06:28:10.647256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.430 [2024-07-26 06:28:10.647289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.430 [2024-07-26 06:28:10.647313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.430 [2024-07-26 06:28:10.647331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.430 [2024-07-26 06:28:10.647372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.430 qpair failed and we were unable to recover it.
00:35:59.430 [2024-07-26 06:28:10.657090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.657244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.657278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.657302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.657321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.657361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.667114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.667270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.667303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.667327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.667345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.667386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.677140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.677307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.677341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.677363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.677382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.677422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.687176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.687370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.687409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.687434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.687453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.687494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.697322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.697469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.697502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.697525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.697544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.697584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.707498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.707690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.707724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.707747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.707765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.707805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.717294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.717459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.717493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.717516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.717535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.717589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.727330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.727489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.727522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.727546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.727570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.727612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.737329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.737481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.737514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.737537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.737556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.737596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.747376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.747530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.747564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.747587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.747605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.747645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.431 [2024-07-26 06:28:10.757371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.431 [2024-07-26 06:28:10.757537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.431 [2024-07-26 06:28:10.757570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.431 [2024-07-26 06:28:10.757592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.431 [2024-07-26 06:28:10.757612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.431 [2024-07-26 06:28:10.757651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.431 qpair failed and we were unable to recover it.
00:35:59.693 [2024-07-26 06:28:10.767525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.693 [2024-07-26 06:28:10.767673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.693 [2024-07-26 06:28:10.767707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.693 [2024-07-26 06:28:10.767730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.693 [2024-07-26 06:28:10.767749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.693 [2024-07-26 06:28:10.767789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.693 qpair failed and we were unable to recover it.
00:35:59.693 [2024-07-26 06:28:10.777413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.693 [2024-07-26 06:28:10.777570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.693 [2024-07-26 06:28:10.777609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.693 [2024-07-26 06:28:10.777633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.693 [2024-07-26 06:28:10.777652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.693 [2024-07-26 06:28:10.777692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.693 qpair failed and we were unable to recover it.
00:35:59.693 [2024-07-26 06:28:10.787481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.693 [2024-07-26 06:28:10.787635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.693 [2024-07-26 06:28:10.787669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.693 [2024-07-26 06:28:10.787692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.693 [2024-07-26 06:28:10.787711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.693 [2024-07-26 06:28:10.787752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.693 qpair failed and we were unable to recover it.
00:35:59.693 [2024-07-26 06:28:10.797532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.693 [2024-07-26 06:28:10.797710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.693 [2024-07-26 06:28:10.797743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.693 [2024-07-26 06:28:10.797766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.693 [2024-07-26 06:28:10.797785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.693 [2024-07-26 06:28:10.797826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.693 qpair failed and we were unable to recover it.
00:35:59.693 [2024-07-26 06:28:10.807566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.693 [2024-07-26 06:28:10.807716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.693 [2024-07-26 06:28:10.807749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.693 [2024-07-26 06:28:10.807773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.693 [2024-07-26 06:28:10.807791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.693 [2024-07-26 06:28:10.807832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.693 qpair failed and we were unable to recover it.
00:35:59.693 [2024-07-26 06:28:10.817548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.693 [2024-07-26 06:28:10.817703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.693 [2024-07-26 06:28:10.817736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.693 [2024-07-26 06:28:10.817758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.693 [2024-07-26 06:28:10.817783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.693 [2024-07-26 06:28:10.817824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.693 qpair failed and we were unable to recover it.
00:35:59.693 [2024-07-26 06:28:10.827672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.827832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.827865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.827888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.827907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.827947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.837669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.837833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.837867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.837890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.837908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.837948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.847734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.847888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.847922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.847945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.847964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.848018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.857674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.857826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.857860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.857883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.857902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.857942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.867762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.867918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.867951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.867975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.867994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.868034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.877751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.877922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.877955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.877978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.877998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.878037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.887751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.887899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.887932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.887955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.887974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.888014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.897809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:59.694 [2024-07-26 06:28:10.897966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:59.694 [2024-07-26 06:28:10.898004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:59.694 [2024-07-26 06:28:10.898028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:59.694 [2024-07-26 06:28:10.898047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:35:59.694 [2024-07-26 06:28:10.898095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:59.694 qpair failed and we were unable to recover it.
00:35:59.694 [2024-07-26 06:28:10.907821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.694 [2024-07-26 06:28:10.907973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.694 [2024-07-26 06:28:10.908007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.694 [2024-07-26 06:28:10.908037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.694 [2024-07-26 06:28:10.908076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.694 [2024-07-26 06:28:10.908120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.694 qpair failed and we were unable to recover it. 
00:35:59.694 [2024-07-26 06:28:10.917863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.694 [2024-07-26 06:28:10.918013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.694 [2024-07-26 06:28:10.918053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.694 [2024-07-26 06:28:10.918085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.694 [2024-07-26 06:28:10.918104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.694 [2024-07-26 06:28:10.918145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.694 qpair failed and we were unable to recover it. 
00:35:59.694 [2024-07-26 06:28:10.927858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.694 [2024-07-26 06:28:10.927998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.694 [2024-07-26 06:28:10.928030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.694 [2024-07-26 06:28:10.928069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.694 [2024-07-26 06:28:10.928092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.694 [2024-07-26 06:28:10.928133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.694 qpair failed and we were unable to recover it. 
00:35:59.694 [2024-07-26 06:28:10.937941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.694 [2024-07-26 06:28:10.938107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.694 [2024-07-26 06:28:10.938141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.694 [2024-07-26 06:28:10.938164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.694 [2024-07-26 06:28:10.938183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.694 [2024-07-26 06:28:10.938224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.694 qpair failed and we were unable to recover it. 
00:35:59.694 [2024-07-26 06:28:10.947948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.694 [2024-07-26 06:28:10.948109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.694 [2024-07-26 06:28:10.948142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.694 [2024-07-26 06:28:10.948166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.694 [2024-07-26 06:28:10.948185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.694 [2024-07-26 06:28:10.948225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.694 qpair failed and we were unable to recover it. 
00:35:59.695 [2024-07-26 06:28:10.957925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.695 [2024-07-26 06:28:10.958079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.695 [2024-07-26 06:28:10.958112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.695 [2024-07-26 06:28:10.958135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.695 [2024-07-26 06:28:10.958154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.695 [2024-07-26 06:28:10.958194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.695 qpair failed and we were unable to recover it. 
00:35:59.695 [2024-07-26 06:28:10.968065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.695 [2024-07-26 06:28:10.968244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.695 [2024-07-26 06:28:10.968277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.695 [2024-07-26 06:28:10.968300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.695 [2024-07-26 06:28:10.968318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.695 [2024-07-26 06:28:10.968381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.695 qpair failed and we were unable to recover it. 
00:35:59.695 [2024-07-26 06:28:10.977989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.695 [2024-07-26 06:28:10.978136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.695 [2024-07-26 06:28:10.978169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.695 [2024-07-26 06:28:10.978192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.695 [2024-07-26 06:28:10.978211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.695 [2024-07-26 06:28:10.978251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.695 qpair failed and we were unable to recover it. 
00:35:59.695 [2024-07-26 06:28:10.988056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.695 [2024-07-26 06:28:10.988247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.695 [2024-07-26 06:28:10.988279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.695 [2024-07-26 06:28:10.988302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.695 [2024-07-26 06:28:10.988321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.695 [2024-07-26 06:28:10.988360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.695 qpair failed and we were unable to recover it. 
00:35:59.695 [2024-07-26 06:28:10.998085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.695 [2024-07-26 06:28:10.998227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.695 [2024-07-26 06:28:10.998265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.695 [2024-07-26 06:28:10.998288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.695 [2024-07-26 06:28:10.998308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.695 [2024-07-26 06:28:10.998349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.695 qpair failed and we were unable to recover it. 
00:35:59.695 [2024-07-26 06:28:11.008113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.695 [2024-07-26 06:28:11.008258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.695 [2024-07-26 06:28:11.008291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.695 [2024-07-26 06:28:11.008315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.695 [2024-07-26 06:28:11.008334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.695 [2024-07-26 06:28:11.008374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.695 qpair failed and we were unable to recover it. 
00:35:59.695 [2024-07-26 06:28:11.018107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.695 [2024-07-26 06:28:11.018248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.695 [2024-07-26 06:28:11.018281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.695 [2024-07-26 06:28:11.018313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.695 [2024-07-26 06:28:11.018332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.695 [2024-07-26 06:28:11.018372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.695 qpair failed and we were unable to recover it. 
00:35:59.954 [2024-07-26 06:28:11.028208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.954 [2024-07-26 06:28:11.028366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.954 [2024-07-26 06:28:11.028399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.028422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.028441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.028481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.038218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.038377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.038410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.038432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.038449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.038494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.048241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.048389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.048422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.048446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.048465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.048505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.058257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.058426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.058461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.058489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.058509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.058550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.068277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.068482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.068516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.068539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.068558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.068598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.078352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.078554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.078588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.078611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.078630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.078669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.088350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.088496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.088534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.088558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.088577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.088618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.098327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.098469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.098503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.098526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.098545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.098586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.108396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.108555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.108588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.108624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.108643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.108684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.118444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.118587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.118621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.118644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.118663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.118703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.128489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.128628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.128660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.128681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.128704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.128744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.138494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.138645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.138679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.138702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.955 [2024-07-26 06:28:11.138721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.955 [2024-07-26 06:28:11.138761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.955 qpair failed and we were unable to recover it. 
00:35:59.955 [2024-07-26 06:28:11.148530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.955 [2024-07-26 06:28:11.148729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.955 [2024-07-26 06:28:11.148762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.955 [2024-07-26 06:28:11.148785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.148804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.148844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.158567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.158714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.158747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.158770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.158789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.158842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.168556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.168704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.168742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.168765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.168784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.168825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.178556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.178740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.178773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.178796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.178815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.178855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.188677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.188890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.188928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.188952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.188972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.189026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.198670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.198842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.198875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.198898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.198917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.198957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.208743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.208902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.208936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.208959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.208977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.209017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.218692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.218842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.218876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.218899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.218923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.218964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.228729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.228879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.228912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.228934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.228953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.228993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.238842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.239038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.239082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.239107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.239126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.239180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.248841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.248995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.249028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.249052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.249084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.249126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.258818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.258963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.258996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.956 [2024-07-26 06:28:11.259019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.956 [2024-07-26 06:28:11.259038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.956 [2024-07-26 06:28:11.259087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.956 qpair failed and we were unable to recover it. 
00:35:59.956 [2024-07-26 06:28:11.268845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.956 [2024-07-26 06:28:11.269007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.956 [2024-07-26 06:28:11.269040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.957 [2024-07-26 06:28:11.269073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.957 [2024-07-26 06:28:11.269096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.957 [2024-07-26 06:28:11.269137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.957 qpair failed and we were unable to recover it. 
00:35:59.957 [2024-07-26 06:28:11.278863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.957 [2024-07-26 06:28:11.279020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.957 [2024-07-26 06:28:11.279053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.957 [2024-07-26 06:28:11.279084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.957 [2024-07-26 06:28:11.279104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:35:59.957 [2024-07-26 06:28:11.279145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:59.957 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.288964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.289162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.289196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.289220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.289239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.289293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.298929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.299087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.299121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.299144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.299162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.299203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.308931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.309094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.309128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.309158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.309178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.309218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.318972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.319130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.319164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.319187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.319206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.319246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.329045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.329244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.329276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.329298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.329317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.329358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.339041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.339203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.339237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.339259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.339278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.339319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.349135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.349294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.349327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.349350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.349368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.349408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.359111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.359264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.359297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.359320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.359339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.359385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.216 [2024-07-26 06:28:11.369192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.216 [2024-07-26 06:28:11.369335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.216 [2024-07-26 06:28:11.369372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.216 [2024-07-26 06:28:11.369398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.216 [2024-07-26 06:28:11.369417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.216 [2024-07-26 06:28:11.369473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.216 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.379169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.379316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.379350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.379373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.379392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.379433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.389164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.389321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.389354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.389377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.389396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.389436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.399301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.399451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.399490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.399514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.399532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.399572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.409255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.409445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.409479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.409501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.409520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.409560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.419229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.419374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.419407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.419430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.419449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.419489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.429299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.429457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.429489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.429513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.429531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.429572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.439310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.439466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.439500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.439527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.439547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.439592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.449422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.449597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.449630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.449653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.449672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.449712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.459393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.459541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.459574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.459597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.459617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.459657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.469411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.469580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.469613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.469636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.469656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.469696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.479471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.479629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.479665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.479689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.479708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.479747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.489458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.489610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.489648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.489672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.489691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.489731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.499504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.217 [2024-07-26 06:28:11.499658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.217 [2024-07-26 06:28:11.499690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.217 [2024-07-26 06:28:11.499713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.217 [2024-07-26 06:28:11.499732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.217 [2024-07-26 06:28:11.499772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.217 qpair failed and we were unable to recover it. 
00:36:00.217 [2024-07-26 06:28:11.509542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.218 [2024-07-26 06:28:11.509700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.218 [2024-07-26 06:28:11.509733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.218 [2024-07-26 06:28:11.509756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.218 [2024-07-26 06:28:11.509774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.218 [2024-07-26 06:28:11.509816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.218 qpair failed and we were unable to recover it.
00:36:00.218 [2024-07-26 06:28:11.519769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.218 [2024-07-26 06:28:11.519922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.218 [2024-07-26 06:28:11.519956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.218 [2024-07-26 06:28:11.519978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.218 [2024-07-26 06:28:11.519997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.218 [2024-07-26 06:28:11.520037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.218 qpair failed and we were unable to recover it.
00:36:00.218 [2024-07-26 06:28:11.529623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.218 [2024-07-26 06:28:11.529798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.218 [2024-07-26 06:28:11.529831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.218 [2024-07-26 06:28:11.529855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.218 [2024-07-26 06:28:11.529873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.218 [2024-07-26 06:28:11.529919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.218 qpair failed and we were unable to recover it.
00:36:00.218 [2024-07-26 06:28:11.539641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.218 [2024-07-26 06:28:11.539785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.218 [2024-07-26 06:28:11.539818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.218 [2024-07-26 06:28:11.539841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.218 [2024-07-26 06:28:11.539859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.218 [2024-07-26 06:28:11.539900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.218 qpair failed and we were unable to recover it.
00:36:00.477 [2024-07-26 06:28:11.549692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.477 [2024-07-26 06:28:11.549850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.477 [2024-07-26 06:28:11.549884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.477 [2024-07-26 06:28:11.549907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.477 [2024-07-26 06:28:11.549926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.477 [2024-07-26 06:28:11.549979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.477 qpair failed and we were unable to recover it.
00:36:00.477 [2024-07-26 06:28:11.559666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.477 [2024-07-26 06:28:11.559814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.477 [2024-07-26 06:28:11.559853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.477 [2024-07-26 06:28:11.559876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.477 [2024-07-26 06:28:11.559894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.477 [2024-07-26 06:28:11.559934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.477 qpair failed and we were unable to recover it.
00:36:00.477 [2024-07-26 06:28:11.569746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.477 [2024-07-26 06:28:11.569898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.477 [2024-07-26 06:28:11.569931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.477 [2024-07-26 06:28:11.569954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.477 [2024-07-26 06:28:11.569973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.477 [2024-07-26 06:28:11.570013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.477 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.579767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.579921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.579955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.579979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.579998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.580052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.589774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.589933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.589966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.589990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.590009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.590049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.599756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.599907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.599941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.599964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.599983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.600023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.609859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.610011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.610043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.610074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.610096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.610138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.620055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.620207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.620241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.620264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.620301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.620344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.629888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.630051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.630092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.630116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.630135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.630176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.639945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.640109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.640143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.640167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.640186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.640226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.650005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.650161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.650194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.650218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.650236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.650277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.659960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.660153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.660188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.660221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.660241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.660283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.670045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.670224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.670258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.670281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.670301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.670341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.680137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.680300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.680334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.680356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.680375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.680415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.690070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.690214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.690247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.690270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.690288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.690329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.700094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.478 [2024-07-26 06:28:11.700248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.478 [2024-07-26 06:28:11.700281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.478 [2024-07-26 06:28:11.700303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.478 [2024-07-26 06:28:11.700323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.478 [2024-07-26 06:28:11.700363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.478 qpair failed and we were unable to recover it.
00:36:00.478 [2024-07-26 06:28:11.710125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.710281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.710315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.710347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.710367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.710408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.720219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.720385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.720419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.720442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.720461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.720501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.730242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.730390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.730423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.730447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.730466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.730506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.740199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.740373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.740419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.740442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.740460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.740501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.750253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.750407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.750441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.750464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.750483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.750529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.760259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.760423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.760456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.760479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.760498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.760538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.770349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.770504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.770538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.770561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.770580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.770620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.780323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.780473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.780506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.780529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.780548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.780588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.790337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.790496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.790529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.790552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.790571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.790610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.479 [2024-07-26 06:28:11.800425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.479 [2024-07-26 06:28:11.800575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.479 [2024-07-26 06:28:11.800609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.479 [2024-07-26 06:28:11.800636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.479 [2024-07-26 06:28:11.800657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.479 [2024-07-26 06:28:11.800698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.479 qpair failed and we were unable to recover it.
00:36:00.740 [2024-07-26 06:28:11.810437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.740 [2024-07-26 06:28:11.810592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.740 [2024-07-26 06:28:11.810625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.740 [2024-07-26 06:28:11.810649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.740 [2024-07-26 06:28:11.810668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.740 [2024-07-26 06:28:11.810709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.740 qpair failed and we were unable to recover it.
00:36:00.740 [2024-07-26 06:28:11.820507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.740 [2024-07-26 06:28:11.820677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.740 [2024-07-26 06:28:11.820711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.740 [2024-07-26 06:28:11.820733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.740 [2024-07-26 06:28:11.820752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.740 [2024-07-26 06:28:11.820793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.740 qpair failed and we were unable to recover it.
00:36:00.740 [2024-07-26 06:28:11.830508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.740 [2024-07-26 06:28:11.830666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.740 [2024-07-26 06:28:11.830700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.740 [2024-07-26 06:28:11.830723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.740 [2024-07-26 06:28:11.830742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.740 [2024-07-26 06:28:11.830796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.740 qpair failed and we were unable to recover it.
00:36:00.740 [2024-07-26 06:28:11.840546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.740 [2024-07-26 06:28:11.840761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.740 [2024-07-26 06:28:11.840795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.740 [2024-07-26 06:28:11.840818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.740 [2024-07-26 06:28:11.840836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.740 [2024-07-26 06:28:11.840876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.740 qpair failed and we were unable to recover it.
00:36:00.740 [2024-07-26 06:28:11.850529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.740 [2024-07-26 06:28:11.850700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.740 [2024-07-26 06:28:11.850734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.740 [2024-07-26 06:28:11.850757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.740 [2024-07-26 06:28:11.850776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.740 [2024-07-26 06:28:11.850815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.740 qpair failed and we were unable to recover it.
00:36:00.740 [2024-07-26 06:28:11.860614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:00.740 [2024-07-26 06:28:11.860823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:00.740 [2024-07-26 06:28:11.860865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:00.740 [2024-07-26 06:28:11.860889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:00.740 [2024-07-26 06:28:11.860914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80
00:36:00.740 [2024-07-26 06:28:11.860954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:00.740 qpair failed and we were unable to recover it.
00:36:00.740 [2024-07-26 06:28:11.870598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.740 [2024-07-26 06:28:11.870752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.740 [2024-07-26 06:28:11.870785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.740 [2024-07-26 06:28:11.870808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.740 [2024-07-26 06:28:11.870826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.740 [2024-07-26 06:28:11.870866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.740 qpair failed and we were unable to recover it. 
00:36:00.740 [2024-07-26 06:28:11.880725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.740 [2024-07-26 06:28:11.880935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.740 [2024-07-26 06:28:11.880980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.740 [2024-07-26 06:28:11.881004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.740 [2024-07-26 06:28:11.881023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.740 [2024-07-26 06:28:11.881071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.740 qpair failed and we were unable to recover it. 
00:36:00.740 [2024-07-26 06:28:11.890680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.740 [2024-07-26 06:28:11.890828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.890867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.890891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.890911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.890951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.900666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.900813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.900845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.900866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.900883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.900922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.910800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.910986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.911023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.911047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.911076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.911118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.920736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.920884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.920917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.920940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.920958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.920998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.930820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.930989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.931024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.931051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.931081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.931147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.940774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.940918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.940952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.940975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.940993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.941033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.950787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.950936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.950975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.950997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.951015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.951054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.960866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.961029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.961070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.961106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.961124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.961163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.970957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.971115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.971148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.971171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.971188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.971242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.980929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.981124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.981169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.981192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.981209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.981260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:11.990988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:11.991180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:11.991213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:11.991235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:11.991253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:11.991292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:12.000975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:12.001133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:12.001167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:12.001189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:12.001207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:12.001246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:12.011026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:12.011205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.741 [2024-07-26 06:28:12.011239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.741 [2024-07-26 06:28:12.011262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.741 [2024-07-26 06:28:12.011279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.741 [2024-07-26 06:28:12.011319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.741 qpair failed and we were unable to recover it. 
00:36:00.741 [2024-07-26 06:28:12.021013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.741 [2024-07-26 06:28:12.021162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.742 [2024-07-26 06:28:12.021196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.742 [2024-07-26 06:28:12.021218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.742 [2024-07-26 06:28:12.021241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.742 [2024-07-26 06:28:12.021282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.742 qpair failed and we were unable to recover it. 
00:36:00.742 [2024-07-26 06:28:12.031181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.742 [2024-07-26 06:28:12.031350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.742 [2024-07-26 06:28:12.031383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.742 [2024-07-26 06:28:12.031406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.742 [2024-07-26 06:28:12.031423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.742 [2024-07-26 06:28:12.031475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.742 qpair failed and we were unable to recover it. 
00:36:00.742 [2024-07-26 06:28:12.041179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.742 [2024-07-26 06:28:12.041342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.742 [2024-07-26 06:28:12.041376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.742 [2024-07-26 06:28:12.041399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.742 [2024-07-26 06:28:12.041417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.742 [2024-07-26 06:28:12.041456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.742 qpair failed and we were unable to recover it. 
00:36:00.742 [2024-07-26 06:28:12.051143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.742 [2024-07-26 06:28:12.051295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.742 [2024-07-26 06:28:12.051329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.742 [2024-07-26 06:28:12.051351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.742 [2024-07-26 06:28:12.051376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.742 [2024-07-26 06:28:12.051415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.742 qpair failed and we were unable to recover it. 
00:36:00.742 [2024-07-26 06:28:12.061141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.742 [2024-07-26 06:28:12.061302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.742 [2024-07-26 06:28:12.061334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.742 [2024-07-26 06:28:12.061356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.742 [2024-07-26 06:28:12.061375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.742 [2024-07-26 06:28:12.061413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.742 qpair failed and we were unable to recover it. 
00:36:00.742 [2024-07-26 06:28:12.071200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.742 [2024-07-26 06:28:12.071363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.742 [2024-07-26 06:28:12.071397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.742 [2024-07-26 06:28:12.071419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.742 [2024-07-26 06:28:12.071437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:00.742 [2024-07-26 06:28:12.071477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.742 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.081207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.081408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.081442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.081464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.081482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.081521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.091273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.091423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.091457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.091479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.091497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.091536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.101310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.101458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.101491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.101513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.101530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.101569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.111348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.111507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.111542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.111570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.111589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.111641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.121337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.121489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.121522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.121545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.121562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.121602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.131392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.131551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.131583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.131605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.131623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.131690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.141368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.141508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.141541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.141563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.141580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.141625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.151472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.151689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.151724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.151746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.151764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.151804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.161471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.161635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.161669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.161692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.161711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.161750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.171680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.171843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.171877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.171899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.171916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.171956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.181532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.181688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.181722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.181744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.181762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.181801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.191605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.004 [2024-07-26 06:28:12.191805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.004 [2024-07-26 06:28:12.191838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.004 [2024-07-26 06:28:12.191860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.004 [2024-07-26 06:28:12.191878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.004 [2024-07-26 06:28:12.191917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.004 qpair failed and we were unable to recover it. 
00:36:01.004 [2024-07-26 06:28:12.201576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.201724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.201757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.201784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.201803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.201842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.211652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.211834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.211868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.211891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.211909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.211962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.221666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.221816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.221850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.221872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.221890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.221929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.231694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.231845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.231879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.231901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.231919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.231972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.241664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.241822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.241855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.241878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.241896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.241935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.251742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.251888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.251921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.251944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.251962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.252001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.261776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.261934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.261968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.261990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.262008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.262048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.271788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.271933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.271966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.271988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.272007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.272047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.281797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.281944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.281977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.282000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.282018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.282057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.291857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.292010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.292049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.292082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.292102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.292142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.301865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.302015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.302048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.302083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.302103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.302142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.311939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.312102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.312136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.312158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.312176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.312215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.321914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.322085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.005 [2024-07-26 06:28:12.322118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.005 [2024-07-26 06:28:12.322141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.005 [2024-07-26 06:28:12.322159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.005 [2024-07-26 06:28:12.322198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.005 qpair failed and we were unable to recover it. 
00:36:01.005 [2024-07-26 06:28:12.331955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.005 [2024-07-26 06:28:12.332101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.006 [2024-07-26 06:28:12.332135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.006 [2024-07-26 06:28:12.332157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.006 [2024-07-26 06:28:12.332174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.006 [2024-07-26 06:28:12.332219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.006 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.342014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.342171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.342210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.342232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.342250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.342289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.351997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.352158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.352191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.352214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.352231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.352271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.362082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.362275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.362312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.362335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.362352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.362392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.372117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.372263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.372297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.372319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.372337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.372378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.382087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.382234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.382272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.382295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.382313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.382353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.392132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.392294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.392327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.392363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.392382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.392421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.402288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.402460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.402499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.402523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.402541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.402582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.412234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.412388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.412423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.412446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.412464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.412504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.422284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.422430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.422465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.422487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.422511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.422564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.432223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.432374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.432408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.432430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.432448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.432487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.442268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.442414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.442448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.442470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.442488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.442541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.266 qpair failed and we were unable to recover it. 
00:36:01.266 [2024-07-26 06:28:12.452319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.266 [2024-07-26 06:28:12.452480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.266 [2024-07-26 06:28:12.452514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.266 [2024-07-26 06:28:12.452536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.266 [2024-07-26 06:28:12.452554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.266 [2024-07-26 06:28:12.452593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.462470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.462623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.462656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.462678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.462695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.462735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.472540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.472748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.472781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.472803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.472821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.472861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.482377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.482537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.482571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.482593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.482610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.482649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.492414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.492562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.492596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.492618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.492636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.492675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.502415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.502565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.502598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.502621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.502638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.502677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.512486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.512660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.512694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.512715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.512738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.512779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.522484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.522630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.522663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.522685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.522702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.522743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.532615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.532769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.532802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.532825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.532842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.532906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.542558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.542737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.542770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.542793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.542811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.542862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.552630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.552781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.552815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.552838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.552855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.552895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.562585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.562733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.562766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.562788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.562805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.562844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.572674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.572822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.572857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.572878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.572896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.572935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.582713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.267 [2024-07-26 06:28:12.582862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.267 [2024-07-26 06:28:12.582896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.267 [2024-07-26 06:28:12.582918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.267 [2024-07-26 06:28:12.582935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.267 [2024-07-26 06:28:12.582988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.267 qpair failed and we were unable to recover it. 
00:36:01.267 [2024-07-26 06:28:12.592769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.268 [2024-07-26 06:28:12.592931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.268 [2024-07-26 06:28:12.592964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.268 [2024-07-26 06:28:12.592987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.268 [2024-07-26 06:28:12.593004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.268 [2024-07-26 06:28:12.593043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.268 qpair failed and we were unable to recover it. 
00:36:01.526 [2024-07-26 06:28:12.602840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.526 [2024-07-26 06:28:12.603025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.526 [2024-07-26 06:28:12.603066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.526 [2024-07-26 06:28:12.603097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.603117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.603157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.612773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.612919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.612952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.612974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.612991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.613031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.622776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.622925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.622959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.622981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.622998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.623037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.632912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.633082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.633115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.633137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.633155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.633195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.642833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.643001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.643035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.643057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.643085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.643124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.652967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.653136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.653175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.653198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.653216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.653272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.662985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.663153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.663189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.663212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.663229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.663269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.672954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.673126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.673160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.673183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.673201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.673240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.683204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.683398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.683432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.683454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.683471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.683511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.693026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.693179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.693217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.693241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.693259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.693298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.703089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.703229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.703262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.703285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.703303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.703342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.713132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.713298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.713332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.713354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.713372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.713411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.723159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.723324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.723357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.723379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.723397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.527 [2024-07-26 06:28:12.723436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.527 qpair failed and we were unable to recover it. 
00:36:01.527 [2024-07-26 06:28:12.733203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.527 [2024-07-26 06:28:12.733350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.527 [2024-07-26 06:28:12.733389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.527 [2024-07-26 06:28:12.733412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.527 [2024-07-26 06:28:12.733429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.733473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.743167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.743337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.743370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.743392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.743410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.743449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.753176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.753330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.753363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.753386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.753403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.753442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.763230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.763409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.763442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.763463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.763481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.763520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.773263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.773416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.773449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.773471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.773489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.773528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.783241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.783382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.783421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.783444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.783461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.783501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.793326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.793488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.793521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.793544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.793561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.793600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.803311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.803462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.803494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.803516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.803534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.803572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.813393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.813558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.813592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.813614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.813632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.813671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.823401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.823547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.823580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.823603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.823625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.823666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.833422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.833572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.833604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.833627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.833644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.833683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.843454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.843597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.843630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.843651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.843668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.843707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.528 [2024-07-26 06:28:12.853503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.528 [2024-07-26 06:28:12.853653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.528 [2024-07-26 06:28:12.853686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.528 [2024-07-26 06:28:12.853708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.528 [2024-07-26 06:28:12.853726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.528 [2024-07-26 06:28:12.853764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.528 qpair failed and we were unable to recover it. 
00:36:01.787 [2024-07-26 06:28:12.863490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.787 [2024-07-26 06:28:12.863646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.787 [2024-07-26 06:28:12.863679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.787 [2024-07-26 06:28:12.863701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.787 [2024-07-26 06:28:12.863718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.787 [2024-07-26 06:28:12.863757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.873678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.873881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.873913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.873935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.873953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.873992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.883634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.883788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.883822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.883843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.883861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.883899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.893658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.893809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.893843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.893865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.893883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.893922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.903671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.903830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.903871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.903895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.903926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.903967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.913716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.913877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.913915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.913938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.913961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.914002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.923776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.923975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.924009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.924031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.924048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.924107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.933732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.933874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.933907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.933930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.933948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.933988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.943737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.943880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.943914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.943936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.943953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.943993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.953796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.953957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.953990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.954013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.954030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:36:01.788 [2024-07-26 06:28:12.954077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.963874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.964030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.964080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.964107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.964127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.788 [2024-07-26 06:28:12.964171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.973853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.973995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.974030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.974053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.974080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.788 [2024-07-26 06:28:12.974121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.983938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.984108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.984144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.984168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.984186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.788 [2024-07-26 06:28:12.984226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:12.993876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.788 [2024-07-26 06:28:12.994053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.788 [2024-07-26 06:28:12.994096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.788 [2024-07-26 06:28:12.994119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.788 [2024-07-26 06:28:12.994137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.788 [2024-07-26 06:28:12.994182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.788 qpair failed and we were unable to recover it. 
00:36:01.788 [2024-07-26 06:28:13.004015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.004179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.004214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.004242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.004261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.004301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.013965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.014121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.014155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.014178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.014196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.014235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.023987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.024142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.024181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.024205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.024222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.024262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.034025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.034186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.034220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.034242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.034260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.034300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.044033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.044193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.044227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.044250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.044268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.044307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.054116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.054256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.054290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.054312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.054330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.054369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.064147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.064292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.064326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.064348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.064366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.064405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.074186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.074350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.074384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.074406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.074424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.074462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.084249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.084411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.084446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.084468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.084487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.084526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.094238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.094402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.094441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.094465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.094483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.094522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.104202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.104347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.104380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.104403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.104421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.104460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:01.789 [2024-07-26 06:28:13.114285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.789 [2024-07-26 06:28:13.114438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.789 [2024-07-26 06:28:13.114471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.789 [2024-07-26 06:28:13.114494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.789 [2024-07-26 06:28:13.114511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:01.789 [2024-07-26 06:28:13.114563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:01.789 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.124338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.124511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.124544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.124570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.124588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.124627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.134380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.134527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.134560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.134582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.134599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.134643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.144379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.144528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.144561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.144583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.144601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.144640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.154402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.154554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.154587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.154609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.154626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.154666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.164493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.164694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.164728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.164750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.164769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.164808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.174505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.174655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.174689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.174712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.174729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.174768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.184452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.184598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.184638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.184662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.184680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.184719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.194553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.194705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.194744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.194766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.194784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.194824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.204595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.204754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.204787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.204809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.204827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.204865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.214617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.214777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.214810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.214832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.214849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.214888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.224656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.224851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.224885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.050 [2024-07-26 06:28:13.224907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.050 [2024-07-26 06:28:13.224925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.050 [2024-07-26 06:28:13.224969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.050 qpair failed and we were unable to recover it. 
00:36:02.050 [2024-07-26 06:28:13.234636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.050 [2024-07-26 06:28:13.234799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.050 [2024-07-26 06:28:13.234832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.234854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.234872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.234911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.244687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.244836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.244869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.244891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.244909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.244948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.254694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.254858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.254892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.254914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.254931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.254970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.264739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.264888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.264922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.264944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.264962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.265000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.274770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.274928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.274961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.274984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.275001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.275040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.284788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.284936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.284969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.284991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.285009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.285047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.294844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.294989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.295022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.295045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.295070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.295111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.304827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.304962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.304995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.305018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.305035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.305086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.314836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.314986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.315019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.315041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.315071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.315112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.324877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.325026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.325068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.325095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.325114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:36:02.051 [2024-07-26 06:28:13.325162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.334924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.335087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.335128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.335153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.335173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:36:02.051 [2024-07-26 06:28:13.335215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.344905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.345054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.345097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.345122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.345140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:36:02.051 [2024-07-26 06:28:13.345189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.354986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.355162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.355202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.355227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.355245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000210000 00:36:02.051 [2024-07-26 06:28:13.355286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.051 qpair failed and we were unable to recover it. 
00:36:02.051 [2024-07-26 06:28:13.365014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.051 [2024-07-26 06:28:13.365182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.051 [2024-07-26 06:28:13.365221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.051 [2024-07-26 06:28:13.365245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.051 [2024-07-26 06:28:13.365263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000210000 00:36:02.052 [2024-07-26 06:28:13.365304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.052 qpair failed and we were unable to recover it. 00:36:02.052 [2024-07-26 06:28:13.365665] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:02.052 A controller has encountered a failure and is being reset. 00:36:02.052 Controller properly reset. 00:36:02.309 Initializing NVMe Controllers 00:36:02.309 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:02.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:02.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:02.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:02.309 Initialization complete. Launching workers. 
00:36:02.309 Starting thread on core 1 00:36:02.309 Starting thread on core 2 00:36:02.309 Starting thread on core 3 00:36:02.309 Starting thread on core 0 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:02.309 00:36:02.309 real 0m11.600s 00:36:02.309 user 0m20.742s 00:36:02.309 sys 0m5.421s 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.309 ************************************ 00:36:02.309 END TEST nvmf_target_disconnect_tc2 00:36:02.309 ************************************ 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:02.309 rmmod nvme_tcp 00:36:02.309 rmmod nvme_fabrics 00:36:02.309 rmmod nvme_keyring 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 313413 ']' 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 313413 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 313413 ']' 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 313413 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 313413 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 313413' 00:36:02.309 killing process with pid 313413 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 313413 00:36:02.309 06:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 313413 00:36:03.688 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:36:03.688 06:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:03.688 06:28:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:03.688 06:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:03.688 06:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:03.688 06:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:03.688 06:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.688 06:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.688 06:28:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.221 06:28:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:06.221 00:36:06.221 real 0m17.485s 00:36:06.221 user 0m49.038s 00:36:06.221 sys 0m7.689s 00:36:06.221 06:28:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:06.221 06:28:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:06.221 ************************************ 00:36:06.221 END TEST nvmf_target_disconnect 00:36:06.221 ************************************ 00:36:06.221 06:28:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:06.221 00:36:06.221 real 7m31.619s 00:36:06.221 user 19m13.846s 00:36:06.221 sys 1m28.920s 00:36:06.221 06:28:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:06.221 06:28:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.221 ************************************ 00:36:06.221 END TEST nvmf_host 00:36:06.221 ************************************ 00:36:06.221 00:36:06.221 real 28m59.223s 00:36:06.221 user 77m50.152s 00:36:06.221 sys 
6m2.486s 00:36:06.221 06:28:16 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:06.221 06:28:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:06.221 ************************************ 00:36:06.222 END TEST nvmf_tcp 00:36:06.222 ************************************ 00:36:06.222 06:28:17 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:36:06.222 06:28:17 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:06.222 06:28:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:06.222 06:28:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:06.222 06:28:17 -- common/autotest_common.sh@10 -- # set +x 00:36:06.222 ************************************ 00:36:06.222 START TEST spdkcli_nvmf_tcp 00:36:06.222 ************************************ 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:06.222 * Looking for test storage... 
00:36:06.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=314743 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 314743 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 314743 ']' 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:06.222 06:28:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:06.222 [2024-07-26 06:28:17.189758] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:36:06.222 [2024-07-26 06:28:17.189913] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314743 ] 00:36:06.222 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.222 [2024-07-26 06:28:17.312732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:06.480 [2024-07-26 06:28:17.562203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.480 [2024-07-26 06:28:17.562205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.046 06:28:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:07.046 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:07.046 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:07.046 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:36:07.046 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:07.046 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:07.046 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:07.046 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:07.046 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:07.046 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:07.046 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:07.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:07.046 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:07.046 ' 00:36:09.581 [2024-07-26 06:28:20.797986] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.961 [2024-07-26 06:28:22.039234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:13.491 [2024-07-26 06:28:24.326772] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:15.392 [2024-07-26 06:28:26.305284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:16.770 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:16.770 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:16.770 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:16.770 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:16.770 Executing command: 
['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:16.770 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:16.770 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:16.770 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:16.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:16.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:16.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:16.770 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:16.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:16.770 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:16.770 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:16.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:16.771 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:16.771 06:28:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:17.029 06:28:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.287 06:28:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:17.287 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:17.287 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:17.287 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:17.287 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:17.287 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:17.287 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 
00:36:17.287 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:17.287 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:17.287 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:17.287 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:17.287 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:17.287 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:17.287 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:17.287 ' 00:36:23.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:23.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:23.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:23.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:23.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:23.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:23.862 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:23.862 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:23.862 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:23.862 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:23.862 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:23.862 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:23.862 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:23.862 Executing command: 
['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 314743 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 314743 ']' 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 314743 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 314743 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 314743' 00:36:23.862 killing process with pid 314743 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 314743 00:36:23.862 06:28:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 314743 00:36:24.430 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:36:24.430 06:28:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:24.430 06:28:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 314743 ']' 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 314743 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 
314743 ']' 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 314743 00:36:24.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (314743) - No such process 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 314743 is not found' 00:36:24.431 Process with pid 314743 is not found 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:24.431 00:36:24.431 real 0m18.579s 00:36:24.431 user 0m38.415s 00:36:24.431 sys 0m1.074s 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:24.431 06:28:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.431 ************************************ 00:36:24.431 END TEST spdkcli_nvmf_tcp 00:36:24.431 ************************************ 00:36:24.431 06:28:35 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:24.431 06:28:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:24.431 06:28:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:24.431 06:28:35 -- common/autotest_common.sh@10 -- # set +x 00:36:24.431 ************************************ 00:36:24.431 START TEST nvmf_identify_passthru 00:36:24.431 ************************************ 00:36:24.431 06:28:35 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 
00:36:24.431 * Looking for test storage... 00:36:24.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.431 06:28:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.431 06:28:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.431 06:28:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.431 06:28:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export 
PATH 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:24.431 06:28:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.431 06:28:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.431 06:28:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.431 06:28:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:24.431 06:28:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.431 06:28:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.431 06:28:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:24.431 06:28:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:24.431 06:28:35 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:24.431 06:28:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:26.965 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.966 
06:28:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:26.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:26.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:26.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:26.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:26.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:36:26.966 00:36:26.966 --- 10.0.0.2 ping statistics --- 00:36:26.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.966 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:26.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:36:26.966 00:36:26.966 --- 10.0.0.1 ping statistics --- 00:36:26.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.966 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:26.966 06:28:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:26.966 06:28:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.966 06:28:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:36:26.966 06:28:37 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:36:26.966 06:28:37 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:36:26.966 06:28:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:36:26.966 06:28:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:36:26.967 06:28:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:36:26.967 06:28:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:26.967 06:28:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:26.967 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.186 06:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:36:31.186 06:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:36:31.186 06:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:36:31.186 06:28:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:31.186 EAL: No free 2048 kB hugepages reported on node 1 00:36:36.466 06:28:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:36.466 06:28:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.466 06:28:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.466 06:28:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=319625 00:36:36.466 06:28:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:36.466 06:28:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:36.466 06:28:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 319625 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 319625 ']' 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:36.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:36.466 06:28:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.466 [2024-07-26 06:28:46.833552] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:36.466 [2024-07-26 06:28:46.833710] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:36.466 EAL: No free 2048 kB hugepages reported on node 1 00:36:36.466 [2024-07-26 06:28:46.968786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:36.466 [2024-07-26 06:28:47.228111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:36.466 [2024-07-26 06:28:47.228174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:36.466 [2024-07-26 06:28:47.228200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:36.466 [2024-07-26 06:28:47.228219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:36.466 [2024-07-26 06:28:47.228239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:36.466 [2024-07-26 06:28:47.228460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.466 [2024-07-26 06:28:47.228534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:36.466 [2024-07-26 06:28:47.228634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.466 [2024-07-26 06:28:47.228641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:36.466 06:28:47 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:36.466 06:28:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:36:36.466 06:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:36.466 06:28:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.466 06:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.466 INFO: Log level set to 20 00:36:36.466 INFO: Requests: 00:36:36.466 { 00:36:36.466 "jsonrpc": "2.0", 00:36:36.466 "method": "nvmf_set_config", 00:36:36.466 "id": 1, 00:36:36.466 "params": { 00:36:36.466 "admin_cmd_passthru": { 00:36:36.466 "identify_ctrlr": true 00:36:36.466 } 00:36:36.466 } 00:36:36.466 } 00:36:36.466 00:36:36.466 INFO: response: 00:36:36.466 { 00:36:36.466 "jsonrpc": "2.0", 00:36:36.466 "id": 1, 00:36:36.466 "result": true 00:36:36.466 } 00:36:36.466 00:36:36.466 06:28:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.466 06:28:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:36.466 06:28:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.466 06:28:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.466 INFO: Setting log level to 20 00:36:36.466 INFO: Setting log level to 20 00:36:36.466 INFO: Log level set to 20 00:36:36.466 INFO: Log level set to 20 00:36:36.466 
INFO: Requests: 00:36:36.466 { 00:36:36.466 "jsonrpc": "2.0", 00:36:36.466 "method": "framework_start_init", 00:36:36.466 "id": 1 00:36:36.466 } 00:36:36.466 00:36:36.466 INFO: Requests: 00:36:36.466 { 00:36:36.466 "jsonrpc": "2.0", 00:36:36.466 "method": "framework_start_init", 00:36:36.466 "id": 1 00:36:36.466 } 00:36:36.466 00:36:37.033 [2024-07-26 06:28:48.098484] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:37.033 INFO: response: 00:36:37.033 { 00:36:37.033 "jsonrpc": "2.0", 00:36:37.033 "id": 1, 00:36:37.033 "result": true 00:36:37.033 } 00:36:37.033 00:36:37.033 INFO: response: 00:36:37.033 { 00:36:37.033 "jsonrpc": "2.0", 00:36:37.033 "id": 1, 00:36:37.033 "result": true 00:36:37.033 } 00:36:37.033 00:36:37.033 06:28:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.033 06:28:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:37.033 06:28:48 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.033 06:28:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:37.033 INFO: Setting log level to 40 00:36:37.033 INFO: Setting log level to 40 00:36:37.033 INFO: Setting log level to 40 00:36:37.033 [2024-07-26 06:28:48.111266] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.033 06:28:48 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.033 06:28:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:37.033 06:28:48 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:37.033 06:28:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:37.033 06:28:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:36:37.033 06:28:48 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.033 06:28:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.318 Nvme0n1 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.318 [2024-07-26 06:28:51.055423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.318 06:28:51 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.318 [ 00:36:40.318 { 00:36:40.318 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:40.318 "subtype": "Discovery", 00:36:40.318 "listen_addresses": [], 00:36:40.318 "allow_any_host": true, 00:36:40.318 "hosts": [] 00:36:40.318 }, 00:36:40.318 { 00:36:40.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.318 "subtype": "NVMe", 00:36:40.318 "listen_addresses": [ 00:36:40.318 { 00:36:40.318 "trtype": "TCP", 00:36:40.318 "adrfam": "IPv4", 00:36:40.318 "traddr": "10.0.0.2", 00:36:40.318 "trsvcid": "4420" 00:36:40.318 } 00:36:40.318 ], 00:36:40.318 "allow_any_host": true, 00:36:40.318 "hosts": [], 00:36:40.318 "serial_number": "SPDK00000000000001", 00:36:40.318 "model_number": "SPDK bdev Controller", 00:36:40.318 "max_namespaces": 1, 00:36:40.318 "min_cntlid": 1, 00:36:40.318 "max_cntlid": 65519, 00:36:40.318 "namespaces": [ 00:36:40.318 { 00:36:40.318 "nsid": 1, 00:36:40.318 "bdev_name": "Nvme0n1", 00:36:40.318 "name": "Nvme0n1", 00:36:40.318 "nguid": "728A3CCD570843EABB48FC623EA94F35", 00:36:40.318 "uuid": "728a3ccd-5708-43ea-bb48-fc623ea94f35" 00:36:40.318 } 00:36:40.318 ] 00:36:40.318 } 00:36:40.318 ] 00:36:40.318 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:40.318 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:36:40.318 06:28:51 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:40.318 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:40.318 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.577 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:40.577 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:36:40.577 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:40.577 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.577 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:40.577 06:28:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:40.577 rmmod 
nvme_tcp 00:36:40.577 rmmod nvme_fabrics 00:36:40.577 rmmod nvme_keyring 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 319625 ']' 00:36:40.577 06:28:51 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 319625 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 319625 ']' 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 319625 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319625 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319625' 00:36:40.577 killing process with pid 319625 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 319625 00:36:40.577 06:28:51 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 319625 00:36:43.110 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:36:43.110 06:28:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:43.110 06:28:54 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:43.110 06:28:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:36:43.110 06:28:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:43.110 06:28:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:43.110 06:28:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.110 06:28:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:43.110 06:28:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.641 06:28:56 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:45.641 00:36:45.641 real 0m20.808s 00:36:45.641 user 0m34.097s 00:36:45.641 sys 0m2.699s 00:36:45.641 06:28:56 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.641 06:28:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:45.641 ************************************ 00:36:45.641 END TEST nvmf_identify_passthru 00:36:45.641 ************************************ 00:36:45.641 06:28:56 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:45.641 06:28:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:45.641 06:28:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:45.641 06:28:56 -- common/autotest_common.sh@10 -- # set +x 00:36:45.641 ************************************ 00:36:45.641 START TEST nvmf_dif 00:36:45.641 ************************************ 00:36:45.641 06:28:56 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:45.641 * Looking for test storage... 
00:36:45.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:45.641 06:28:56 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.641 06:28:56 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.641 06:28:56 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.641 06:28:56 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.641 06:28:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.641 06:28:56 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.641 06:28:56 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.641 06:28:56 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:45.641 06:28:56 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:45.641 06:28:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:45.641 06:28:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:45.641 06:28:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:45.641 06:28:56 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:45.641 06:28:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.641 06:28:56 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:45.642 06:28:56 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:45.642 06:28:56 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:45.642 06:28:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.642 06:28:56 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:45.642 06:28:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.642 06:28:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:45.642 06:28:56 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:45.642 06:28:56 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:36:45.642 06:28:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:47.543 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:36:47.543 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:47.543 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:47.543 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:47.543 06:28:58 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:47.543 06:28:58 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:47.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:36:47.544 00:36:47.544 --- 10.0.0.2 ping statistics --- 00:36:47.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.544 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:47.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:47.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:36:47.544 00:36:47.544 --- 10.0.0.1 ping statistics --- 00:36:47.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.544 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:47.544 06:28:58 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:48.507 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:48.507 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:48.507 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:48.507 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:48.507 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:48.507 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:48.507 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:48.507 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:48.507 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:48.507 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:48.507 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:48.507 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:48.507 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:48.507 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:48.507 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:48.507 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:48.507 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.766 06:28:59 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:48.766 06:28:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:48.766 06:28:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=323144 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:48.766 06:28:59 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 323144 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 323144 ']' 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:48.766 06:28:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:48.766 [2024-07-26 06:29:00.058407] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:36:48.766 [2024-07-26 06:29:00.058565] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:49.024 EAL: No free 2048 kB hugepages reported on node 1 00:36:49.024 [2024-07-26 06:29:00.193859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.282 [2024-07-26 06:29:00.441380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:49.282 [2024-07-26 06:29:00.441455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:49.282 [2024-07-26 06:29:00.441483] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:49.282 [2024-07-26 06:29:00.441507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:49.282 [2024-07-26 06:29:00.441528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:49.282 [2024-07-26 06:29:00.441581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.850 06:29:00 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:49.850 06:29:00 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:36:49.850 06:29:00 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:49.850 06:29:00 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:49.850 06:29:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:49.850 06:29:01 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:49.850 06:29:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:49.850 06:29:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:49.850 06:29:01 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.850 06:29:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:49.850 [2024-07-26 06:29:01.009871] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:49.850 06:29:01 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.850 06:29:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:49.850 06:29:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:49.850 06:29:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.850 06:29:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:49.850 ************************************ 00:36:49.850 START TEST fio_dif_1_default 00:36:49.850 ************************************ 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:49.850 bdev_null0 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:49.850 [2024-07-26 06:29:01.070259] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:49.850 { 00:36:49.850 "params": { 00:36:49.850 "name": "Nvme$subsystem", 00:36:49.850 "trtype": "$TEST_TRANSPORT", 00:36:49.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:49.850 "adrfam": "ipv4", 00:36:49.850 "trsvcid": "$NVMF_PORT", 00:36:49.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:49.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:49.850 "hdgst": ${hdgst:-false}, 00:36:49.850 "ddgst": ${ddgst:-false} 00:36:49.850 }, 00:36:49.850 "method": "bdev_nvme_attach_controller" 00:36:49.850 } 00:36:49.850 EOF 00:36:49.850 )") 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:49.850 06:29:01 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:36:49.850 06:29:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:49.850 "params": { 00:36:49.851 "name": "Nvme0", 00:36:49.851 "trtype": "tcp", 00:36:49.851 "traddr": "10.0.0.2", 00:36:49.851 "adrfam": "ipv4", 00:36:49.851 "trsvcid": "4420", 00:36:49.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.851 "hdgst": false, 00:36:49.851 "ddgst": false 00:36:49.851 }, 00:36:49.851 "method": "bdev_nvme_attach_controller" 00:36:49.851 }' 00:36:49.851 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:49.851 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:49.851 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:36:49.851 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:49.851 06:29:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:50.109 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:50.109 fio-3.35 00:36:50.109 Starting 1 thread 00:36:50.109 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.366 00:37:02.366 filename0: (groupid=0, jobs=1): err= 0: pid=323501: Fri Jul 26 06:29:12 2024 00:37:02.366 read: IOPS=185, BW=743KiB/s (761kB/s)(7440KiB/10017msec) 00:37:02.366 slat (usec): min=7, max=234, avg=14.35, stdev= 8.24 00:37:02.366 clat (usec): min=883, max=43275, avg=21497.81, stdev=20425.65 00:37:02.366 lat (usec): min=895, max=43297, avg=21512.17, stdev=20424.61 00:37:02.366 clat percentiles (usec): 00:37:02.366 | 1.00th=[ 
938], 5.00th=[ 971], 10.00th=[ 996], 20.00th=[ 1029], 00:37:02.366 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[41157], 60.00th=[41681], 00:37:02.366 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:37:02.366 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:37:02.366 | 99.99th=[43254] 00:37:02.366 bw ( KiB/s): min= 704, max= 768, per=99.90%, avg=742.40, stdev=32.17, samples=20 00:37:02.366 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:37:02.366 lat (usec) : 1000=11.29% 00:37:02.366 lat (msec) : 2=38.60%, 50=50.11% 00:37:02.366 cpu : usr=91.30%, sys=8.22%, ctx=28, majf=0, minf=1636 00:37:02.366 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.366 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.366 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:02.366 00:37:02.366 Run status group 0 (all jobs): 00:37:02.366 READ: bw=743KiB/s (761kB/s), 743KiB/s-743KiB/s (761kB/s-761kB/s), io=7440KiB (7619kB), run=10017-10017msec 00:37:02.366 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:37:02.366 ----------------------------------------------------- 00:37:02.366 Suppressions used: 00:37:02.366 count bytes template 00:37:02.366 1 8 /usr/src/fio/parse.c 00:37:02.366 1 8 libtcmalloc_minimal.so 00:37:02.366 1 904 libcrypto.so 00:37:02.366 ----------------------------------------------------- 00:37:02.366 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:02.367 
06:29:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 00:37:02.367 real 0m12.319s 00:37:02.367 user 0m11.270s 00:37:02.367 sys 0m1.260s 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 ************************************ 00:37:02.367 END TEST fio_dif_1_default 00:37:02.367 ************************************ 00:37:02.367 06:29:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:02.367 06:29:13 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:02.367 06:29:13 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 ************************************ 00:37:02.367 START TEST fio_dif_1_multi_subsystems 00:37:02.367 ************************************ 00:37:02.367 06:29:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 bdev_null0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 [2024-07-26 06:29:13.438422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 bdev_null1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- 
# local subsystem config 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:02.367 { 00:37:02.367 "params": { 00:37:02.367 "name": "Nvme$subsystem", 00:37:02.367 "trtype": "$TEST_TRANSPORT", 00:37:02.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.367 "adrfam": "ipv4", 00:37:02.367 "trsvcid": "$NVMF_PORT", 00:37:02.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.367 "hdgst": ${hdgst:-false}, 00:37:02.367 "ddgst": ${ddgst:-false} 00:37:02.367 }, 00:37:02.367 "method": "bdev_nvme_attach_controller" 00:37:02.367 } 00:37:02.367 EOF 00:37:02.367 )") 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.367 
06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:02.367 { 00:37:02.367 "params": { 00:37:02.367 "name": "Nvme$subsystem", 00:37:02.367 "trtype": "$TEST_TRANSPORT", 00:37:02.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.367 "adrfam": "ipv4", 00:37:02.367 "trsvcid": "$NVMF_PORT", 00:37:02.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.367 "hdgst": ${hdgst:-false}, 00:37:02.367 "ddgst": ${ddgst:-false} 00:37:02.367 }, 00:37:02.367 "method": "bdev_nvme_attach_controller" 00:37:02.367 } 00:37:02.367 EOF 00:37:02.367 )") 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:02.367 "params": { 00:37:02.367 "name": "Nvme0", 00:37:02.367 "trtype": "tcp", 00:37:02.367 "traddr": "10.0.0.2", 00:37:02.367 "adrfam": "ipv4", 00:37:02.367 "trsvcid": "4420", 00:37:02.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.367 "hdgst": false, 00:37:02.367 "ddgst": false 00:37:02.367 }, 00:37:02.367 "method": "bdev_nvme_attach_controller" 00:37:02.367 },{ 00:37:02.367 "params": { 00:37:02.367 "name": "Nvme1", 00:37:02.367 "trtype": "tcp", 00:37:02.367 "traddr": "10.0.0.2", 00:37:02.367 "adrfam": "ipv4", 00:37:02.367 "trsvcid": "4420", 00:37:02.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:02.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:02.367 "hdgst": false, 00:37:02.367 "ddgst": false 00:37:02.367 }, 00:37:02.367 "method": "bdev_nvme_attach_controller" 00:37:02.367 }' 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:02.367 06:29:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.624 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:02.624 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:02.624 fio-3.35 00:37:02.624 Starting 2 threads 00:37:02.624 EAL: No free 2048 kB hugepages reported on node 1 00:37:14.828 00:37:14.828 filename0: (groupid=0, jobs=1): err= 0: pid=325023: Fri Jul 26 06:29:24 2024 00:37:14.828 read: IOPS=187, BW=749KiB/s (767kB/s)(7488KiB/10001msec) 00:37:14.828 slat (nsec): min=6258, max=64451, avg=15749.06, stdev=6969.63 00:37:14.828 clat (usec): min=815, max=42975, avg=21320.89, stdev=20314.54 00:37:14.828 lat (usec): min=826, max=43031, avg=21336.64, stdev=20313.58 00:37:14.828 clat percentiles (usec): 00:37:14.828 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 857], 20.00th=[ 889], 00:37:14.828 | 30.00th=[ 922], 40.00th=[ 1020], 50.00th=[41157], 60.00th=[41157], 00:37:14.828 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:14.828 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:14.829 | 99.99th=[42730] 00:37:14.829 bw ( KiB/s): min= 672, max= 768, per=66.22%, avg=749.47, stdev=30.76, samples=19 00:37:14.829 iops : min= 168, max= 192, avg=187.37, stdev= 7.69, samples=19 00:37:14.829 lat (usec) : 1000=39.05% 00:37:14.829 lat (msec) : 2=10.74%, 50=50.21% 00:37:14.829 cpu : usr=94.96%, sys=4.53%, ctx=13, majf=0, minf=1637 00:37:14.829 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.829 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.829 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:14.829 filename1: (groupid=0, jobs=1): err= 0: pid=325024: Fri Jul 26 06:29:24 2024 00:37:14.829 read: IOPS=96, BW=384KiB/s (394kB/s)(3856KiB/10029msec) 00:37:14.829 slat (nsec): min=5320, max=85783, avg=16923.09, stdev=9322.82 00:37:14.829 clat (usec): min=40851, max=42950, avg=41560.19, 
stdev=504.33 00:37:14.829 lat (usec): min=40864, max=42986, avg=41577.12, stdev=505.43 00:37:14.829 clat percentiles (usec): 00:37:14.829 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:14.829 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:37:14.829 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:14.829 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:14.829 | 99.99th=[42730] 00:37:14.829 bw ( KiB/s): min= 352, max= 416, per=33.95%, avg=384.00, stdev=10.38, samples=20 00:37:14.829 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:37:14.829 lat (msec) : 50=100.00% 00:37:14.829 cpu : usr=95.06%, sys=4.30%, ctx=19, majf=0, minf=1640 00:37:14.829 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.829 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.829 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:14.829 00:37:14.829 Run status group 0 (all jobs): 00:37:14.829 READ: bw=1131KiB/s (1158kB/s), 384KiB/s-749KiB/s (394kB/s-767kB/s), io=11.1MiB (11.6MB), run=10001-10029msec 00:37:14.829 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:37:14.829 ----------------------------------------------------- 00:37:14.829 Suppressions used: 00:37:14.829 count bytes template 00:37:14.829 2 16 /usr/src/fio/parse.c 00:37:14.829 1 8 libtcmalloc_minimal.so 00:37:14.829 1 904 libcrypto.so 00:37:14.829 ----------------------------------------------------- 00:37:14.829 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 
-- # local sub 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.829 
06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.829 00:37:14.829 real 0m12.523s 00:37:14.829 user 0m21.421s 00:37:14.829 sys 0m1.386s 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 ************************************ 00:37:14.829 END TEST fio_dif_1_multi_subsystems 00:37:14.829 ************************************ 00:37:14.829 06:29:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:14.829 06:29:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:14.829 06:29:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 ************************************ 00:37:14.829 START TEST fio_dif_rand_params 00:37:14.829 ************************************ 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:14.829 06:29:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 bdev_null0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:14.829 [2024-07-26 06:29:26.009647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:14.829 { 00:37:14.829 "params": { 00:37:14.829 "name": "Nvme$subsystem", 00:37:14.829 "trtype": "$TEST_TRANSPORT", 00:37:14.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:14.829 "adrfam": "ipv4", 00:37:14.829 "trsvcid": "$NVMF_PORT", 00:37:14.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:14.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:14.829 "hdgst": ${hdgst:-false}, 00:37:14.829 "ddgst": ${ddgst:-false} 00:37:14.829 }, 00:37:14.829 "method": "bdev_nvme_attach_controller" 00:37:14.829 } 00:37:14.829 EOF 00:37:14.829 )") 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:14.829 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:14.830 
06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:14.830 "params": { 00:37:14.830 "name": "Nvme0", 00:37:14.830 "trtype": "tcp", 00:37:14.830 "traddr": "10.0.0.2", 00:37:14.830 "adrfam": "ipv4", 00:37:14.830 "trsvcid": "4420", 00:37:14.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:14.830 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:14.830 "hdgst": false, 00:37:14.830 "ddgst": false 00:37:14.830 }, 00:37:14.830 "method": "bdev_nvme_attach_controller" 00:37:14.830 }' 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:14.830 06:29:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.089 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:15.089 ... 
00:37:15.089 fio-3.35 00:37:15.089 Starting 3 threads 00:37:15.089 EAL: No free 2048 kB hugepages reported on node 1 00:37:21.648 00:37:21.648 filename0: (groupid=0, jobs=1): err= 0: pid=326541: Fri Jul 26 06:29:32 2024 00:37:21.648 read: IOPS=168, BW=21.1MiB/s (22.1MB/s)(106MiB/5004msec) 00:37:21.648 slat (nsec): min=7819, max=55307, avg=19575.09, stdev=5397.08 00:37:21.648 clat (usec): min=4158, max=92098, avg=17756.24, stdev=12991.57 00:37:21.648 lat (usec): min=4177, max=92117, avg=17775.82, stdev=12991.39 00:37:21.648 clat percentiles (usec): 00:37:21.648 | 1.00th=[ 6849], 5.00th=[ 7570], 10.00th=[ 9241], 20.00th=[10552], 00:37:21.648 | 30.00th=[11731], 40.00th=[13042], 50.00th=[13960], 60.00th=[15139], 00:37:21.648 | 70.00th=[16188], 80.00th=[17695], 90.00th=[47973], 95.00th=[53216], 00:37:21.648 | 99.00th=[57410], 99.50th=[58459], 99.90th=[91751], 99.95th=[91751], 00:37:21.648 | 99.99th=[91751] 00:37:21.648 bw ( KiB/s): min=13338, max=26624, per=31.46%, avg=21557.80, stdev=4282.91, samples=10 00:37:21.648 iops : min= 104, max= 208, avg=168.40, stdev=33.50, samples=10 00:37:21.648 lat (msec) : 10=15.17%, 20=72.04%, 50=3.91%, 100=8.89% 00:37:21.648 cpu : usr=92.42%, sys=6.76%, ctx=78, majf=0, minf=1637 00:37:21.648 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.648 issued rwts: total=844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.648 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:21.648 filename0: (groupid=0, jobs=1): err= 0: pid=326542: Fri Jul 26 06:29:32 2024 00:37:21.648 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(115MiB/5025msec) 00:37:21.648 slat (nsec): min=7421, max=43638, avg=18823.02, stdev=4069.38 00:37:21.648 clat (usec): min=5911, max=58860, avg=16323.94, stdev=11186.72 00:37:21.648 lat (usec): min=5928, max=58877, avg=16342.76, 
stdev=11186.61 00:37:21.648 clat percentiles (usec): 00:37:21.648 | 1.00th=[ 6521], 5.00th=[ 7373], 10.00th=[ 8717], 20.00th=[10159], 00:37:21.648 | 30.00th=[11207], 40.00th=[12518], 50.00th=[13698], 60.00th=[14746], 00:37:21.648 | 70.00th=[15795], 80.00th=[17433], 90.00th=[20317], 95.00th=[51643], 00:37:21.648 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:37:21.648 | 99.99th=[58983] 00:37:21.648 bw ( KiB/s): min=17408, max=31232, per=34.33%, avg=23526.40, stdev=3714.60, samples=10 00:37:21.648 iops : min= 136, max= 244, avg=183.80, stdev=29.02, samples=10 00:37:21.648 lat (msec) : 10=18.22%, 20=71.69%, 50=3.90%, 100=6.18% 00:37:21.648 cpu : usr=92.81%, sys=6.63%, ctx=12, majf=0, minf=1638 00:37:21.648 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.648 issued rwts: total=922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.648 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:21.648 filename0: (groupid=0, jobs=1): err= 0: pid=326543: Fri Jul 26 06:29:32 2024 00:37:21.649 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(116MiB/5032msec) 00:37:21.649 slat (nsec): min=8752, max=44000, avg=18086.93, stdev=3692.31 00:37:21.649 clat (usec): min=5746, max=91921, avg=16242.93, stdev=12106.45 00:37:21.649 lat (usec): min=5784, max=91956, avg=16261.02, stdev=12106.45 00:37:21.649 clat percentiles (usec): 00:37:21.649 | 1.00th=[ 6259], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[10028], 00:37:21.649 | 30.00th=[11076], 40.00th=[12125], 50.00th=[13173], 60.00th=[14091], 00:37:21.649 | 70.00th=[15008], 80.00th=[16581], 90.00th=[19792], 95.00th=[51643], 00:37:21.649 | 99.00th=[56886], 99.50th=[58983], 99.90th=[91751], 99.95th=[91751], 00:37:21.649 | 99.99th=[91751] 00:37:21.649 bw ( KiB/s): min=15360, max=28416, per=34.55%, avg=23674.80, 
stdev=3915.57, samples=10 00:37:21.649 iops : min= 120, max= 222, avg=184.90, stdev=30.56, samples=10 00:37:21.649 lat (msec) : 10=19.29%, 20=70.80%, 50=2.91%, 100=7.00% 00:37:21.649 cpu : usr=92.47%, sys=7.02%, ctx=13, majf=0, minf=1637 00:37:21.649 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.649 issued rwts: total=928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.649 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:21.649 00:37:21.649 Run status group 0 (all jobs): 00:37:21.649 READ: bw=66.9MiB/s (70.2MB/s), 21.1MiB/s-23.1MiB/s (22.1MB/s-24.2MB/s), io=337MiB (353MB), run=5004-5032msec 00:37:21.907 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:37:22.165 ----------------------------------------------------- 00:37:22.165 Suppressions used: 00:37:22.165 count bytes template 00:37:22.165 5 44 /usr/src/fio/parse.c 00:37:22.165 1 8 libtcmalloc_minimal.so 00:37:22.165 1 904 libcrypto.so 00:37:22.165 ----------------------------------------------------- 00:37:22.165 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.165 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 bdev_null0 00:37:22.166 06:29:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 [2024-07-26 06:29:33.371801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 bdev_null1 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 bdev_null2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 
00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.166 { 00:37:22.166 "params": { 00:37:22.166 "name": "Nvme$subsystem", 00:37:22.166 "trtype": "$TEST_TRANSPORT", 00:37:22.166 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.166 "adrfam": "ipv4", 00:37:22.166 "trsvcid": "$NVMF_PORT", 00:37:22.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.166 "hdgst": ${hdgst:-false}, 00:37:22.166 "ddgst": ${ddgst:-false} 00:37:22.166 }, 00:37:22.166 "method": "bdev_nvme_attach_controller" 00:37:22.166 } 00:37:22.166 EOF 00:37:22.166 )") 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.166 { 00:37:22.166 "params": { 00:37:22.166 "name": "Nvme$subsystem", 00:37:22.166 "trtype": "$TEST_TRANSPORT", 00:37:22.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.166 "adrfam": "ipv4", 00:37:22.166 "trsvcid": "$NVMF_PORT", 00:37:22.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.166 "hdgst": ${hdgst:-false}, 00:37:22.166 
"ddgst": ${ddgst:-false} 00:37:22.166 }, 00:37:22.166 "method": "bdev_nvme_attach_controller" 00:37:22.166 } 00:37:22.166 EOF 00:37:22.166 )") 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.166 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.167 { 00:37:22.167 "params": { 00:37:22.167 "name": "Nvme$subsystem", 00:37:22.167 "trtype": "$TEST_TRANSPORT", 00:37:22.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.167 "adrfam": "ipv4", 00:37:22.167 "trsvcid": "$NVMF_PORT", 00:37:22.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.167 "hdgst": ${hdgst:-false}, 00:37:22.167 "ddgst": ${ddgst:-false} 00:37:22.167 }, 00:37:22.167 "method": "bdev_nvme_attach_controller" 00:37:22.167 } 00:37:22.167 EOF 00:37:22.167 )") 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:22.167 "params": { 00:37:22.167 "name": "Nvme0", 00:37:22.167 "trtype": "tcp", 00:37:22.167 "traddr": "10.0.0.2", 00:37:22.167 "adrfam": "ipv4", 00:37:22.167 "trsvcid": "4420", 00:37:22.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.167 "hdgst": false, 00:37:22.167 "ddgst": false 00:37:22.167 }, 00:37:22.167 "method": "bdev_nvme_attach_controller" 00:37:22.167 },{ 00:37:22.167 "params": { 00:37:22.167 "name": "Nvme1", 00:37:22.167 "trtype": "tcp", 00:37:22.167 "traddr": "10.0.0.2", 00:37:22.167 "adrfam": "ipv4", 00:37:22.167 "trsvcid": "4420", 00:37:22.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:22.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:22.167 "hdgst": false, 00:37:22.167 "ddgst": false 00:37:22.167 }, 00:37:22.167 "method": "bdev_nvme_attach_controller" 00:37:22.167 },{ 00:37:22.167 "params": { 00:37:22.167 "name": "Nvme2", 00:37:22.167 "trtype": "tcp", 00:37:22.167 "traddr": "10.0.0.2", 00:37:22.167 "adrfam": "ipv4", 00:37:22.167 "trsvcid": "4420", 00:37:22.167 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:22.167 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:22.167 "hdgst": false, 00:37:22.167 "ddgst": false 00:37:22.167 }, 00:37:22.167 "method": "bdev_nvme_attach_controller" 00:37:22.167 }' 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:22.167 06:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.424 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.425 ... 00:37:22.425 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.425 ... 00:37:22.425 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:22.425 ... 00:37:22.425 fio-3.35 00:37:22.425 Starting 24 threads 00:37:22.683 EAL: No free 2048 kB hugepages reported on node 1 00:37:34.883 00:37:34.883 filename0: (groupid=0, jobs=1): err= 0: pid=327526: Fri Jul 26 06:29:45 2024 00:37:34.883 read: IOPS=367, BW=1470KiB/s (1505kB/s)(14.4MiB/10030msec) 00:37:34.883 slat (nsec): min=8571, max=81543, avg=27922.25, stdev=10672.55 00:37:34.883 clat (usec): min=4217, max=60134, avg=43290.88, stdev=4295.18 00:37:34.883 lat (usec): min=4238, max=60181, avg=43318.80, stdev=4294.40 00:37:34.883 clat percentiles (usec): 00:37:34.883 | 1.00th=[13829], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:37:34.883 | 30.00th=[43254], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:37:34.883 | 70.00th=[44303], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.883 | 99.00th=[47449], 99.50th=[51119], 99.90th=[60031], 99.95th=[60031], 00:37:34.883 | 99.99th=[60031] 00:37:34.883 bw ( KiB/s): min= 1405, max= 1840, per=4.27%, avg=1467.85, stdev=105.95, samples=20 00:37:34.883 iops : min= 351, max= 460, avg=366.95, stdev=26.50, samples=20 00:37:34.883 lat (msec) : 10=0.38%, 20=0.92%, 50=98.16%, 100=0.54% 00:37:34.883 cpu : usr=93.28%, sys=3.53%, ctx=284, majf=0, minf=1634 00:37:34.883 IO depths : 1=6.0%, 2=12.1%, 4=24.3%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 
00:37:34.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 issued rwts: total=3686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.883 filename0: (groupid=0, jobs=1): err= 0: pid=327527: Fri Jul 26 06:29:45 2024 00:37:34.883 read: IOPS=358, BW=1434KiB/s (1468kB/s)(14.1MiB/10045msec) 00:37:34.883 slat (nsec): min=6012, max=94593, avg=34389.04, stdev=12378.93 00:37:34.883 clat (usec): min=29250, max=95071, avg=44329.39, stdev=4625.32 00:37:34.883 lat (usec): min=29265, max=95094, avg=44363.78, stdev=4622.91 00:37:34.883 clat percentiles (usec): 00:37:34.883 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:37:34.883 | 30.00th=[43254], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:37:34.883 | 70.00th=[44303], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.883 | 99.00th=[61080], 99.50th=[85459], 99.90th=[94897], 99.95th=[94897], 00:37:34.883 | 99.99th=[94897] 00:37:34.883 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1431.25, stdev=99.47, samples=20 00:37:34.883 iops : min= 288, max= 384, avg=357.80, stdev=24.88, samples=20 00:37:34.883 lat (msec) : 50=98.61%, 100=1.39% 00:37:34.883 cpu : usr=97.99%, sys=1.52%, ctx=22, majf=0, minf=1636 00:37:34.883 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.883 filename0: (groupid=0, jobs=1): err= 0: pid=327528: Fri Jul 26 06:29:45 2024 00:37:34.883 read: IOPS=359, BW=1440KiB/s (1474kB/s)(14.1MiB/10002msec) 00:37:34.883 slat (nsec): min=6128, 
max=75714, avg=31277.91, stdev=10373.37 00:37:34.883 clat (usec): min=41447, max=96210, avg=44178.96, stdev=3773.01 00:37:34.883 lat (usec): min=41464, max=96232, avg=44210.24, stdev=3770.87 00:37:34.883 clat percentiles (usec): 00:37:34.883 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:37:34.883 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:37:34.883 | 70.00th=[44303], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.883 | 99.00th=[49021], 99.50th=[60031], 99.90th=[95945], 99.95th=[95945], 00:37:34.883 | 99.99th=[95945] 00:37:34.883 bw ( KiB/s): min= 1152, max= 1536, per=4.18%, avg=1434.95, stdev=91.30, samples=19 00:37:34.883 iops : min= 288, max= 384, avg=358.74, stdev=22.83, samples=19 00:37:34.883 lat (msec) : 50=99.11%, 100=0.89% 00:37:34.883 cpu : usr=97.30%, sys=2.16%, ctx=27, majf=0, minf=1633 00:37:34.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.883 filename0: (groupid=0, jobs=1): err= 0: pid=327529: Fri Jul 26 06:29:45 2024 00:37:34.883 read: IOPS=364, BW=1460KiB/s (1495kB/s)(14.4MiB/10083msec) 00:37:34.883 slat (nsec): min=8294, max=83028, avg=25563.25, stdev=12759.98 00:37:34.883 clat (usec): min=9569, max=84454, avg=43608.22, stdev=5039.97 00:37:34.883 lat (usec): min=9599, max=84493, avg=43633.79, stdev=5039.25 00:37:34.883 clat percentiles (usec): 00:37:34.883 | 1.00th=[11994], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:37:34.883 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:37:34.883 | 70.00th=[44303], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.883 | 99.00th=[47449], 
99.50th=[61080], 99.90th=[84411], 99.95th=[84411], 00:37:34.883 | 99.99th=[84411] 00:37:34.883 bw ( KiB/s): min= 1408, max= 1792, per=4.27%, avg=1465.70, stdev=97.11, samples=20 00:37:34.883 iops : min= 352, max= 448, avg=366.40, stdev=24.29, samples=20 00:37:34.883 lat (msec) : 10=0.19%, 20=1.11%, 50=97.72%, 100=0.98% 00:37:34.883 cpu : usr=96.92%, sys=2.47%, ctx=35, majf=0, minf=1637 00:37:34.883 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:34.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.883 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.883 filename0: (groupid=0, jobs=1): err= 0: pid=327530: Fri Jul 26 06:29:45 2024 00:37:34.883 read: IOPS=362, BW=1451KiB/s (1486kB/s)(14.3MiB/10075msec) 00:37:34.883 slat (nsec): min=10683, max=80540, avg=31671.78, stdev=10262.83 00:37:34.883 clat (msec): min=21, max=123, avg=43.79, stdev= 6.20 00:37:34.883 lat (msec): min=21, max=123, avg=43.82, stdev= 6.20 00:37:34.883 clat percentiles (msec): 00:37:34.884 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 43], 00:37:34.884 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.884 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.884 | 99.00th=[ 66], 99.50th=[ 72], 99.90th=[ 115], 99.95th=[ 125], 00:37:34.884 | 99.99th=[ 125] 00:37:34.884 bw ( KiB/s): min= 1280, max= 1676, per=4.24%, avg=1455.00, stdev=99.99, samples=20 00:37:34.884 iops : min= 320, max= 419, avg=363.75, stdev=25.00, samples=20 00:37:34.884 lat (msec) : 50=97.92%, 100=1.64%, 250=0.44% 00:37:34.884 cpu : usr=96.53%, sys=1.95%, ctx=76, majf=0, minf=1636 00:37:34.884 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:34.884 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 issued rwts: total=3654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.884 filename0: (groupid=0, jobs=1): err= 0: pid=327531: Fri Jul 26 06:29:45 2024 00:37:34.884 read: IOPS=359, BW=1437KiB/s (1471kB/s)(14.2MiB/10113msec) 00:37:34.884 slat (nsec): min=8480, max=75884, avg=21143.32, stdev=7600.49 00:37:34.884 clat (msec): min=20, max=114, avg=44.36, stdev= 5.60 00:37:34.884 lat (msec): min=20, max=114, avg=44.38, stdev= 5.60 00:37:34.884 clat percentiles (msec): 00:37:34.884 | 1.00th=[ 30], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.884 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.884 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:37:34.884 | 99.00th=[ 63], 99.50th=[ 68], 99.90th=[ 114], 99.95th=[ 114], 00:37:34.884 | 99.99th=[ 114] 00:37:34.884 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1446.40, stdev=60.18, samples=20 00:37:34.884 iops : min= 352, max= 384, avg=361.60, stdev=15.05, samples=20 00:37:34.884 lat (msec) : 50=97.58%, 100=1.98%, 250=0.44% 00:37:34.884 cpu : usr=98.05%, sys=1.46%, ctx=13, majf=0, minf=1637 00:37:34.884 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.884 filename0: (groupid=0, jobs=1): err= 0: pid=327532: Fri Jul 26 06:29:45 2024 00:37:34.884 read: IOPS=357, BW=1430KiB/s (1464kB/s)(14.1MiB/10070msec) 00:37:34.884 slat (nsec): min=12355, max=68134, avg=36023.52, stdev=9583.65 00:37:34.884 clat (msec): min=41, max=110, avg=44.43, stdev= 5.92 00:37:34.884 lat 
(msec): min=41, max=110, avg=44.47, stdev= 5.92 00:37:34.884 clat percentiles (msec): 00:37:34.884 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.884 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.884 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.884 | 99.00th=[ 61], 99.50th=[ 100], 99.90th=[ 111], 99.95th=[ 111], 00:37:34.884 | 99.99th=[ 111] 00:37:34.884 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1433.45, stdev=98.32, samples=20 00:37:34.884 iops : min= 288, max= 384, avg=358.35, stdev=24.58, samples=20 00:37:34.884 lat (msec) : 50=98.67%, 100=0.89%, 250=0.44% 00:37:34.884 cpu : usr=97.95%, sys=1.57%, ctx=13, majf=0, minf=1636 00:37:34.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.884 filename0: (groupid=0, jobs=1): err= 0: pid=327533: Fri Jul 26 06:29:45 2024 00:37:34.884 read: IOPS=357, BW=1430KiB/s (1465kB/s)(14.1MiB/10067msec) 00:37:34.884 slat (nsec): min=12544, max=93721, avg=41695.83, stdev=15973.36 00:37:34.884 clat (msec): min=41, max=110, avg=44.35, stdev= 5.85 00:37:34.884 lat (msec): min=41, max=110, avg=44.39, stdev= 5.85 00:37:34.884 clat percentiles (msec): 00:37:34.884 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:37:34.884 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.884 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.884 | 99.00th=[ 61], 99.50th=[ 97], 99.90th=[ 111], 99.95th=[ 111], 00:37:34.884 | 99.99th=[ 111] 00:37:34.884 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1432.30, stdev=98.77, samples=20 00:37:34.884 iops : min= 288, max= 384, 
avg=358.05, stdev=24.70, samples=20 00:37:34.884 lat (msec) : 50=98.67%, 100=0.89%, 250=0.44% 00:37:34.884 cpu : usr=98.10%, sys=1.41%, ctx=19, majf=0, minf=1636 00:37:34.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.884 filename1: (groupid=0, jobs=1): err= 0: pid=327534: Fri Jul 26 06:29:45 2024 00:37:34.884 read: IOPS=358, BW=1434KiB/s (1468kB/s)(14.2MiB/10114msec) 00:37:34.884 slat (usec): min=8, max=104, avg=62.49, stdev=12.29 00:37:34.884 clat (msec): min=20, max=113, avg=43.84, stdev= 4.36 00:37:34.884 lat (msec): min=20, max=113, avg=43.90, stdev= 4.36 00:37:34.884 clat percentiles (msec): 00:37:34.884 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:37:34.884 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.884 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.884 | 99.00th=[ 56], 99.50th=[ 69], 99.90th=[ 114], 99.95th=[ 114], 00:37:34.884 | 99.99th=[ 114] 00:37:34.884 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1446.40, stdev=60.18, samples=20 00:37:34.884 iops : min= 352, max= 384, avg=361.60, stdev=15.05, samples=20 00:37:34.884 lat (msec) : 50=98.43%, 100=1.32%, 250=0.25% 00:37:34.884 cpu : usr=97.88%, sys=1.56%, ctx=16, majf=0, minf=1635 00:37:34.884 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 issued rwts: total=3625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.884 latency : target=0, window=0, percentile=100.00%, depth=16 
00:37:34.884 filename1: (groupid=0, jobs=1): err= 0: pid=327535: Fri Jul 26 06:29:45 2024 00:37:34.884 read: IOPS=357, BW=1430KiB/s (1464kB/s)(14.1MiB/10072msec) 00:37:34.884 slat (nsec): min=12138, max=95240, avg=36413.01, stdev=12987.60 00:37:34.884 clat (msec): min=41, max=113, avg=44.42, stdev= 5.89 00:37:34.884 lat (msec): min=41, max=113, avg=44.45, stdev= 5.89 00:37:34.884 clat percentiles (msec): 00:37:34.884 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.884 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.884 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.884 | 99.00th=[ 61], 99.50th=[ 99], 99.90th=[ 111], 99.95th=[ 114], 00:37:34.884 | 99.99th=[ 114] 00:37:34.884 bw ( KiB/s): min= 1154, max= 1536, per=4.17%, avg=1433.55, stdev=98.02, samples=20 00:37:34.884 iops : min= 288, max= 384, avg=358.35, stdev=24.58, samples=20 00:37:34.884 lat (msec) : 50=98.67%, 100=0.89%, 250=0.44% 00:37:34.884 cpu : usr=98.07%, sys=1.43%, ctx=28, majf=0, minf=1634 00:37:34.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.884 filename1: (groupid=0, jobs=1): err= 0: pid=327536: Fri Jul 26 06:29:45 2024 00:37:34.884 read: IOPS=361, BW=1448KiB/s (1483kB/s)(14.2MiB/10078msec) 00:37:34.884 slat (nsec): min=8511, max=83121, avg=31109.99, stdev=12061.51 00:37:34.884 clat (usec): min=18786, max=84554, avg=43915.28, stdev=3511.30 00:37:34.884 lat (usec): min=18831, max=84596, avg=43946.39, stdev=3512.41 00:37:34.884 clat percentiles (usec): 00:37:34.884 | 1.00th=[33162], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:37:34.884 | 30.00th=[43254], 40.00th=[43779], 
50.00th=[43779], 60.00th=[43779], 00:37:34.884 | 70.00th=[43779], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.884 | 99.00th=[47449], 99.50th=[61080], 99.90th=[84411], 99.95th=[84411], 00:37:34.884 | 99.99th=[84411] 00:37:34.884 bw ( KiB/s): min= 1408, max= 1536, per=4.23%, avg=1452.80, stdev=62.64, samples=20 00:37:34.884 iops : min= 352, max= 384, avg=363.20, stdev=15.66, samples=20 00:37:34.884 lat (msec) : 20=0.05%, 50=99.07%, 100=0.88% 00:37:34.884 cpu : usr=95.78%, sys=2.53%, ctx=76, majf=0, minf=1637 00:37:34.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.884 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.884 filename1: (groupid=0, jobs=1): err= 0: pid=327537: Fri Jul 26 06:29:45 2024 00:37:34.884 read: IOPS=358, BW=1436KiB/s (1470kB/s)(14.1MiB/10028msec) 00:37:34.884 slat (usec): min=11, max=160, avg=33.09, stdev=11.78 00:37:34.884 clat (usec): min=29488, max=85246, avg=44247.93, stdev=3874.22 00:37:34.884 lat (usec): min=29505, max=85283, avg=44281.03, stdev=3873.19 00:37:34.884 clat percentiles (usec): 00:37:34.884 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:37:34.884 | 30.00th=[43254], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:37:34.884 | 70.00th=[43779], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.884 | 99.00th=[61080], 99.50th=[78119], 99.90th=[85459], 99.95th=[85459], 00:37:34.884 | 99.99th=[85459] 00:37:34.884 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1433.60, stdev=78.80, samples=20 00:37:34.884 iops : min= 320, max= 384, avg=358.40, stdev=19.70, samples=20 00:37:34.884 lat (msec) : 50=98.61%, 100=1.39% 00:37:34.884 cpu : usr=94.99%, sys=2.85%, ctx=63, majf=0, minf=1634 
00:37:34.884 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename1: (groupid=0, jobs=1): err= 0: pid=327538: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=359, BW=1436KiB/s (1471kB/s)(14.2MiB/10108msec) 00:37:34.885 slat (nsec): min=11698, max=90665, avg=27629.61, stdev=10831.01 00:37:34.885 clat (msec): min=19, max=114, avg=44.27, stdev= 5.06 00:37:34.885 lat (msec): min=19, max=114, avg=44.30, stdev= 5.06 00:37:34.885 clat percentiles (msec): 00:37:34.885 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.885 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.885 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.885 | 99.00th=[ 58], 99.50th=[ 63], 99.90th=[ 115], 99.95th=[ 115], 00:37:34.885 | 99.99th=[ 115] 00:37:34.885 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1446.05, stdev=56.76, samples=20 00:37:34.885 iops : min= 352, max= 384, avg=361.50, stdev=14.20, samples=20 00:37:34.885 lat (msec) : 20=0.06%, 50=97.96%, 100=1.60%, 250=0.39% 00:37:34.885 cpu : usr=98.20%, sys=1.33%, ctx=23, majf=0, minf=1636 00:37:34.885 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename1: (groupid=0, jobs=1): err= 0: pid=327539: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=361, BW=1445KiB/s 
(1480kB/s)(14.2MiB/10075msec) 00:37:34.885 slat (nsec): min=8243, max=70629, avg=29736.37, stdev=10749.15 00:37:34.885 clat (msec): min=22, max=123, avg=43.99, stdev= 6.11 00:37:34.885 lat (msec): min=22, max=123, avg=44.02, stdev= 6.11 00:37:34.885 clat percentiles (msec): 00:37:34.885 | 1.00th=[ 30], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:37:34.885 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.885 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.885 | 99.00th=[ 67], 99.50th=[ 72], 99.90th=[ 115], 99.95th=[ 125], 00:37:34.885 | 99.99th=[ 125] 00:37:34.885 bw ( KiB/s): min= 1280, max= 1616, per=4.22%, avg=1449.45, stdev=85.32, samples=20 00:37:34.885 iops : min= 320, max= 404, avg=362.35, stdev=21.33, samples=20 00:37:34.885 lat (msec) : 50=97.64%, 100=1.92%, 250=0.44% 00:37:34.885 cpu : usr=95.49%, sys=2.73%, ctx=96, majf=0, minf=1634 00:37:34.885 IO depths : 1=4.7%, 2=9.5%, 4=19.4%, 8=57.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=92.9%, 8=2.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename1: (groupid=0, jobs=1): err= 0: pid=327540: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=357, BW=1432KiB/s (1466kB/s)(14.1MiB/10102msec) 00:37:34.885 slat (nsec): min=10164, max=71439, avg=29972.54, stdev=9494.64 00:37:34.885 clat (msec): min=25, max=123, avg=44.37, stdev= 5.19 00:37:34.885 lat (msec): min=25, max=123, avg=44.40, stdev= 5.19 00:37:34.885 clat percentiles (msec): 00:37:34.885 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.885 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.885 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.885 | 99.00th=[ 56], 99.50th=[ 67], 99.90th=[ 115], 
99.95th=[ 125], 00:37:34.885 | 99.99th=[ 125] 00:37:34.885 bw ( KiB/s): min= 1277, max= 1536, per=4.19%, avg=1439.85, stdev=70.78, samples=20 00:37:34.885 iops : min= 319, max= 384, avg=359.95, stdev=17.72, samples=20 00:37:34.885 lat (msec) : 50=98.17%, 100=1.38%, 250=0.44% 00:37:34.885 cpu : usr=95.54%, sys=2.60%, ctx=840, majf=0, minf=1636 00:37:34.885 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename1: (groupid=0, jobs=1): err= 0: pid=327541: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=357, BW=1429KiB/s (1464kB/s)(14.1MiB/10075msec) 00:37:34.885 slat (usec): min=12, max=152, avg=60.47, stdev=10.03 00:37:34.885 clat (msec): min=33, max=112, avg=44.21, stdev= 6.02 00:37:34.885 lat (msec): min=33, max=112, avg=44.27, stdev= 6.02 00:37:34.885 clat percentiles (msec): 00:37:34.885 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:37:34.885 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.885 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 45], 95.00th=[ 46], 00:37:34.885 | 99.00th=[ 61], 99.50th=[ 102], 99.90th=[ 110], 99.95th=[ 113], 00:37:34.885 | 99.99th=[ 113] 00:37:34.885 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1433.45, stdev=89.12, samples=20 00:37:34.885 iops : min= 288, max= 384, avg=358.35, stdev=22.28, samples=20 00:37:34.885 lat (msec) : 50=98.61%, 100=0.50%, 250=0.89% 00:37:34.885 cpu : usr=97.98%, sys=1.47%, ctx=12, majf=0, minf=1634 00:37:34.885 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename2: (groupid=0, jobs=1): err= 0: pid=327542: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=359, BW=1436KiB/s (1471kB/s)(14.1MiB/10026msec) 00:37:34.885 slat (usec): min=4, max=102, avg=38.87, stdev=15.40 00:37:34.885 clat (usec): min=29622, max=85253, avg=44180.65, stdev=3887.34 00:37:34.885 lat (usec): min=29661, max=85291, avg=44219.52, stdev=3884.56 00:37:34.885 clat percentiles (usec): 00:37:34.885 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:37:34.885 | 30.00th=[43254], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:37:34.885 | 70.00th=[43779], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.885 | 99.00th=[61080], 99.50th=[76022], 99.90th=[85459], 99.95th=[85459], 00:37:34.885 | 99.99th=[85459] 00:37:34.885 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1433.70, stdev=78.59, samples=20 00:37:34.885 iops : min= 320, max= 384, avg=358.40, stdev=19.70, samples=20 00:37:34.885 lat (msec) : 50=98.61%, 100=1.39% 00:37:34.885 cpu : usr=93.18%, sys=3.72%, ctx=485, majf=0, minf=1636 00:37:34.885 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename2: (groupid=0, jobs=1): err= 0: pid=327543: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=357, BW=1430KiB/s (1464kB/s)(14.1MiB/10070msec) 00:37:34.885 slat (nsec): min=13139, max=82614, avg=31128.76, stdev=9689.12 00:37:34.885 clat (msec): min=41, max=110, avg=44.46, stdev= 5.92 00:37:34.885 lat (msec): min=41, max=110, 
avg=44.50, stdev= 5.92 00:37:34.885 clat percentiles (msec): 00:37:34.885 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.885 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.885 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.885 | 99.00th=[ 61], 99.50th=[ 100], 99.90th=[ 111], 99.95th=[ 111], 00:37:34.885 | 99.99th=[ 111] 00:37:34.885 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1433.45, stdev=98.32, samples=20 00:37:34.885 iops : min= 288, max= 384, avg=358.35, stdev=24.58, samples=20 00:37:34.885 lat (msec) : 50=98.67%, 100=0.89%, 250=0.44% 00:37:34.885 cpu : usr=95.81%, sys=2.47%, ctx=92, majf=0, minf=1636 00:37:34.885 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename2: (groupid=0, jobs=1): err= 0: pid=327544: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=361, BW=1447KiB/s (1482kB/s)(14.3MiB/10122msec) 00:37:34.885 slat (nsec): min=8982, max=97592, avg=35189.02, stdev=18850.69 00:37:34.885 clat (msec): min=10, max=128, avg=43.86, stdev= 7.00 00:37:34.885 lat (msec): min=10, max=128, avg=43.89, stdev= 7.00 00:37:34.885 clat percentiles (msec): 00:37:34.885 | 1.00th=[ 28], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 43], 00:37:34.885 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.885 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:37:34.885 | 99.00th=[ 61], 99.50th=[ 62], 99.90th=[ 129], 99.95th=[ 129], 00:37:34.885 | 99.99th=[ 129] 00:37:34.885 bw ( KiB/s): min= 1392, max= 1664, per=4.25%, avg=1459.20, stdev=76.75, samples=20 00:37:34.885 iops : min= 348, max= 416, avg=364.80, stdev=19.19, 
samples=20 00:37:34.885 lat (msec) : 20=0.87%, 50=95.90%, 100=2.84%, 250=0.38% 00:37:34.885 cpu : usr=97.91%, sys=1.58%, ctx=24, majf=0, minf=1635 00:37:34.885 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:37:34.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.885 issued rwts: total=3662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.885 filename2: (groupid=0, jobs=1): err= 0: pid=327545: Fri Jul 26 06:29:45 2024 00:37:34.885 read: IOPS=359, BW=1437KiB/s (1472kB/s)(14.2MiB/10108msec) 00:37:34.885 slat (nsec): min=10232, max=91290, avg=33468.54, stdev=13397.98 00:37:34.885 clat (msec): min=20, max=114, avg=44.24, stdev= 5.36 00:37:34.885 lat (msec): min=20, max=114, avg=44.27, stdev= 5.36 00:37:34.885 clat percentiles (msec): 00:37:34.885 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.886 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.886 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.886 | 99.00th=[ 59], 99.50th=[ 63], 99.90th=[ 115], 99.95th=[ 115], 00:37:34.886 | 99.99th=[ 115] 00:37:34.886 bw ( KiB/s): min= 1405, max= 1536, per=4.21%, avg=1446.05, stdev=58.38, samples=20 00:37:34.886 iops : min= 351, max= 384, avg=361.50, stdev=14.61, samples=20 00:37:34.886 lat (msec) : 50=97.91%, 100=1.65%, 250=0.44% 00:37:34.886 cpu : usr=97.97%, sys=1.54%, ctx=17, majf=0, minf=1637 00:37:34.886 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:37:34.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.886 latency : target=0, window=0, percentile=100.00%, depth=16 
00:37:34.886 filename2: (groupid=0, jobs=1): err= 0: pid=327546: Fri Jul 26 06:29:45 2024 00:37:34.886 read: IOPS=357, BW=1430KiB/s (1465kB/s)(14.1MiB/10067msec) 00:37:34.886 slat (usec): min=12, max=104, avg=39.06, stdev=15.49 00:37:34.886 clat (msec): min=41, max=110, avg=44.37, stdev= 5.84 00:37:34.886 lat (msec): min=41, max=110, avg=44.41, stdev= 5.84 00:37:34.886 clat percentiles (msec): 00:37:34.886 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.886 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.886 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.886 | 99.00th=[ 61], 99.50th=[ 97], 99.90th=[ 111], 99.95th=[ 111], 00:37:34.886 | 99.99th=[ 111] 00:37:34.886 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1432.30, stdev=98.77, samples=20 00:37:34.886 iops : min= 288, max= 384, avg=358.05, stdev=24.70, samples=20 00:37:34.886 lat (msec) : 50=98.67%, 100=0.89%, 250=0.44% 00:37:34.886 cpu : usr=98.10%, sys=1.41%, ctx=43, majf=0, minf=1636 00:37:34.886 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.886 filename2: (groupid=0, jobs=1): err= 0: pid=327547: Fri Jul 26 06:29:45 2024 00:37:34.886 read: IOPS=358, BW=1436KiB/s (1470kB/s)(14.1MiB/10028msec) 00:37:34.886 slat (nsec): min=13385, max=93244, avg=30935.77, stdev=9126.46 00:37:34.886 clat (usec): min=41826, max=85236, avg=44279.94, stdev=3833.72 00:37:34.886 lat (usec): min=41855, max=85263, avg=44310.88, stdev=3832.97 00:37:34.886 clat percentiles (usec): 00:37:34.886 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:37:34.886 | 30.00th=[43254], 40.00th=[43779], 
50.00th=[43779], 60.00th=[43779], 00:37:34.886 | 70.00th=[44303], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:37:34.886 | 99.00th=[61080], 99.50th=[78119], 99.90th=[85459], 99.95th=[85459], 00:37:34.886 | 99.99th=[85459] 00:37:34.886 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1433.60, stdev=78.80, samples=20 00:37:34.886 iops : min= 320, max= 384, avg=358.40, stdev=19.70, samples=20 00:37:34.886 lat (msec) : 50=98.67%, 100=1.33% 00:37:34.886 cpu : usr=95.57%, sys=2.62%, ctx=78, majf=0, minf=1636 00:37:34.886 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.886 filename2: (groupid=0, jobs=1): err= 0: pid=327548: Fri Jul 26 06:29:45 2024 00:37:34.886 read: IOPS=357, BW=1431KiB/s (1465kB/s)(14.1MiB/10063msec) 00:37:34.886 slat (nsec): min=12083, max=88210, avg=33657.53, stdev=13011.78 00:37:34.886 clat (msec): min=41, max=114, avg=44.40, stdev= 5.54 00:37:34.886 lat (msec): min=41, max=114, avg=44.43, stdev= 5.54 00:37:34.886 clat percentiles (msec): 00:37:34.886 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.886 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.886 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.886 | 99.00th=[ 56], 99.50th=[ 84], 99.90th=[ 115], 99.95th=[ 115], 00:37:34.886 | 99.99th=[ 115] 00:37:34.886 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1433.45, stdev=78.85, samples=20 00:37:34.886 iops : min= 320, max= 384, avg=358.35, stdev=19.72, samples=20 00:37:34.886 lat (msec) : 50=98.22%, 100=1.33%, 250=0.44% 00:37:34.886 cpu : usr=94.78%, sys=2.74%, ctx=239, majf=0, minf=1636 00:37:34.886 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:34.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.886 filename2: (groupid=0, jobs=1): err= 0: pid=327549: Fri Jul 26 06:29:45 2024 00:37:34.886 read: IOPS=358, BW=1433KiB/s (1468kB/s)(14.1MiB/10047msec) 00:37:34.886 slat (nsec): min=5632, max=90398, avg=35342.35, stdev=10707.51 00:37:34.886 clat (msec): min=32, max=111, avg=44.34, stdev= 4.83 00:37:34.886 lat (msec): min=32, max=111, avg=44.37, stdev= 4.83 00:37:34.886 clat percentiles (msec): 00:37:34.886 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 44], 00:37:34.886 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:37:34.886 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:37:34.886 | 99.00th=[ 62], 99.50th=[ 86], 99.90th=[ 99], 99.95th=[ 112], 00:37:34.886 | 99.99th=[ 112] 00:37:34.886 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1433.60, stdev=98.27, samples=20 00:37:34.886 iops : min= 288, max= 384, avg=358.40, stdev=24.57, samples=20 00:37:34.886 lat (msec) : 50=98.67%, 100=1.28%, 250=0.06% 00:37:34.886 cpu : usr=94.22%, sys=3.14%, ctx=197, majf=0, minf=1635 00:37:34.886 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:34.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.886 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:34.886 00:37:34.886 Run status group 0 (all jobs): 00:37:34.886 READ: bw=33.5MiB/s (35.2MB/s), 1429KiB/s-1470KiB/s (1464kB/s-1505kB/s), io=339MiB (356MB), run=10002-10122msec 
00:37:35.143 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:37:35.401 ----------------------------------------------------- 00:37:35.401 Suppressions used: 00:37:35.401 count bytes template 00:37:35.401 45 402 /usr/src/fio/parse.c 00:37:35.401 1 8 libtcmalloc_minimal.so 00:37:35.401 1 904 libcrypto.so 00:37:35.401 ----------------------------------------------------- 00:37:35.401 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:35.401 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:35.402 
06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 bdev_null0 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 [2024-07-26 06:29:46.677737] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 bdev_null1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:35.402 06:29:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:37:35.402 { 00:37:35.402 "params": { 00:37:35.402 "name": "Nvme$subsystem", 00:37:35.402 "trtype": "$TEST_TRANSPORT", 00:37:35.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.402 "adrfam": "ipv4", 00:37:35.402 "trsvcid": "$NVMF_PORT", 00:37:35.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.402 "hdgst": ${hdgst:-false}, 00:37:35.402 "ddgst": ${ddgst:-false} 00:37:35.402 }, 00:37:35.402 "method": "bdev_nvme_attach_controller" 00:37:35.402 } 00:37:35.402 EOF 00:37:35.402 )") 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.402 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.402 { 00:37:35.402 "params": { 00:37:35.402 "name": "Nvme$subsystem", 00:37:35.402 "trtype": "$TEST_TRANSPORT", 00:37:35.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.402 "adrfam": "ipv4", 00:37:35.402 "trsvcid": "$NVMF_PORT", 00:37:35.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.402 "hdgst": ${hdgst:-false}, 00:37:35.402 "ddgst": ${ddgst:-false} 00:37:35.402 }, 00:37:35.402 "method": "bdev_nvme_attach_controller" 00:37:35.402 } 00:37:35.402 EOF 00:37:35.403 )") 00:37:35.403 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:35.403 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:35.403 06:29:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.403 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:35.403 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:35.403 06:29:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:35.403 "params": { 00:37:35.403 "name": "Nvme0", 00:37:35.403 "trtype": "tcp", 00:37:35.403 "traddr": "10.0.0.2", 00:37:35.403 "adrfam": "ipv4", 00:37:35.403 "trsvcid": "4420", 00:37:35.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:35.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:35.403 "hdgst": false, 00:37:35.403 "ddgst": false 00:37:35.403 }, 00:37:35.403 "method": "bdev_nvme_attach_controller" 00:37:35.403 },{ 00:37:35.403 "params": { 00:37:35.403 "name": "Nvme1", 00:37:35.403 "trtype": "tcp", 00:37:35.403 "traddr": "10.0.0.2", 00:37:35.403 "adrfam": "ipv4", 00:37:35.403 "trsvcid": "4420", 00:37:35.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:35.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:35.403 "hdgst": false, 00:37:35.403 "ddgst": false 00:37:35.403 }, 00:37:35.403 "method": "bdev_nvme_attach_controller" 00:37:35.403 }' 00:37:35.663 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:35.663 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:35.663 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:37:35.663 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:35.663 06:29:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.951 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:35.951 ... 
00:37:35.951 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:35.951 ... 00:37:35.951 fio-3.35 00:37:35.951 Starting 4 threads 00:37:35.951 EAL: No free 2048 kB hugepages reported on node 1 00:37:42.506 00:37:42.506 filename0: (groupid=0, jobs=1): err= 0: pid=329055: Fri Jul 26 06:29:53 2024 00:37:42.506 read: IOPS=1452, BW=11.3MiB/s (11.9MB/s)(56.8MiB/5002msec) 00:37:42.506 slat (nsec): min=5235, max=80267, avg=25404.18, stdev=11084.87 00:37:42.506 clat (usec): min=1211, max=10099, avg=5415.56, stdev=504.96 00:37:42.506 lat (usec): min=1230, max=10137, avg=5440.96, stdev=504.35 00:37:42.506 clat percentiles (usec): 00:37:42.506 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5211], 00:37:42.506 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5342], 60.00th=[ 5407], 00:37:42.506 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 5866], 00:37:42.506 | 99.00th=[ 7635], 99.50th=[ 9110], 99.90th=[ 9765], 99.95th=[ 9765], 00:37:42.506 | 99.99th=[10159] 00:37:42.506 bw ( KiB/s): min=10800, max=11904, per=24.87%, avg=11603.56, stdev=385.89, samples=9 00:37:42.506 iops : min= 1350, max= 1488, avg=1450.44, stdev=48.24, samples=9 00:37:42.506 lat (msec) : 2=0.11%, 4=0.41%, 10=99.44%, 20=0.04% 00:37:42.506 cpu : usr=93.40%, sys=5.78%, ctx=17, majf=0, minf=1637 00:37:42.506 IO depths : 1=0.1%, 2=19.4%, 4=54.6%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:42.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.506 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.506 issued rwts: total=7264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:42.506 filename0: (groupid=0, jobs=1): err= 0: pid=329056: Fri Jul 26 06:29:53 2024 00:37:42.506 read: IOPS=1456, BW=11.4MiB/s (11.9MB/s)(56.9MiB/5002msec) 00:37:42.506 slat (nsec): min=4749, max=80109, avg=24978.96, 
stdev=11035.96 00:37:42.506 clat (usec): min=965, max=10647, avg=5400.07, stdev=605.09 00:37:42.506 lat (usec): min=984, max=10664, avg=5425.05, stdev=605.06 00:37:42.506 clat percentiles (usec): 00:37:42.506 | 1.00th=[ 3326], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5211], 00:37:42.506 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:37:42.506 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 5866], 00:37:42.506 | 99.00th=[ 8160], 99.50th=[ 8979], 99.90th=[ 9896], 99.95th=[10028], 00:37:42.506 | 99.99th=[10683] 00:37:42.506 bw ( KiB/s): min=10976, max=11904, per=24.96%, avg=11648.60, stdev=333.10, samples=10 00:37:42.506 iops : min= 1372, max= 1488, avg=1456.00, stdev=41.70, samples=10 00:37:42.506 lat (usec) : 1000=0.01% 00:37:42.506 lat (msec) : 2=0.47%, 4=0.84%, 10=98.63%, 20=0.05% 00:37:42.506 cpu : usr=92.30%, sys=6.82%, ctx=9, majf=0, minf=1638 00:37:42.506 IO depths : 1=0.1%, 2=16.7%, 4=56.8%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:42.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.506 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.506 issued rwts: total=7287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.506 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:42.506 filename1: (groupid=0, jobs=1): err= 0: pid=329057: Fri Jul 26 06:29:53 2024 00:37:42.506 read: IOPS=1459, BW=11.4MiB/s (12.0MB/s)(57.0MiB/5001msec) 00:37:42.506 slat (nsec): min=5629, max=84176, avg=25693.35, stdev=11193.03 00:37:42.506 clat (usec): min=1002, max=12143, avg=5386.97, stdev=555.18 00:37:42.506 lat (usec): min=1022, max=12160, avg=5412.66, stdev=555.15 00:37:42.506 clat percentiles (usec): 00:37:42.506 | 1.00th=[ 4080], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5211], 00:37:42.506 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:37:42.506 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5800], 00:37:42.506 | 99.00th=[ 
7832], 99.50th=[ 8717], 99.90th=[ 9503], 99.95th=[ 9765], 00:37:42.506 | 99.99th=[12125] 00:37:42.506 bw ( KiB/s): min=11312, max=11904, per=25.00%, avg=11665.78, stdev=253.42, samples=9 00:37:42.506 iops : min= 1414, max= 1488, avg=1458.22, stdev=31.68, samples=9 00:37:42.506 lat (msec) : 2=0.41%, 4=0.56%, 10=99.01%, 20=0.01% 00:37:42.506 cpu : usr=92.82%, sys=6.26%, ctx=17, majf=0, minf=1634 00:37:42.507 IO depths : 1=0.1%, 2=17.7%, 4=56.0%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:42.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.507 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.507 issued rwts: total=7299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:42.507 filename1: (groupid=0, jobs=1): err= 0: pid=329058: Fri Jul 26 06:29:53 2024 00:37:42.507 read: IOPS=1466, BW=11.5MiB/s (12.0MB/s)(57.3MiB/5004msec) 00:37:42.507 slat (nsec): min=5146, max=68209, avg=24947.52, stdev=8362.33 00:37:42.507 clat (usec): min=1056, max=9838, avg=5369.46, stdev=469.37 00:37:42.507 lat (usec): min=1090, max=9862, avg=5394.40, stdev=469.57 00:37:42.507 clat percentiles (usec): 00:37:42.507 | 1.00th=[ 3916], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5211], 00:37:42.507 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:37:42.507 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5800], 00:37:42.507 | 99.00th=[ 6783], 99.50th=[ 8029], 99.90th=[ 9241], 99.95th=[ 9634], 00:37:42.507 | 99.99th=[ 9896] 00:37:42.507 bw ( KiB/s): min=11360, max=11920, per=25.13%, avg=11724.80, stdev=221.03, samples=10 00:37:42.507 iops : min= 1420, max= 1490, avg=1465.60, stdev=27.63, samples=10 00:37:42.507 lat (msec) : 2=0.23%, 4=0.86%, 10=98.91% 00:37:42.507 cpu : usr=92.00%, sys=6.28%, ctx=190, majf=0, minf=1637 00:37:42.507 IO depths : 1=0.1%, 2=17.1%, 4=56.5%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:42.507 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.507 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.507 issued rwts: total=7336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:42.507 00:37:42.507 Run status group 0 (all jobs): 00:37:42.507 READ: bw=45.6MiB/s (47.8MB/s), 11.3MiB/s-11.5MiB/s (11.9MB/s-12.0MB/s), io=228MiB (239MB), run=5001-5004msec 00:37:43.073 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:37:43.073 ----------------------------------------------------- 00:37:43.073 Suppressions used: 00:37:43.073 count bytes template 00:37:43.073 6 52 /usr/src/fio/parse.c 00:37:43.073 1 8 libtcmalloc_minimal.so 00:37:43.073 1 904 libcrypto.so 00:37:43.073 ----------------------------------------------------- 00:37:43.073 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:43.073 06:29:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.073 00:37:43.073 real 0m28.417s 00:37:43.073 user 4m34.853s 00:37:43.073 sys 0m8.972s 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:43.073 06:29:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:43.073 ************************************ 00:37:43.073 END TEST fio_dif_rand_params 00:37:43.073 ************************************ 00:37:43.332 06:29:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:43.332 06:29:54 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:43.332 06:29:54 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:43.332 06:29:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:43.332 ************************************ 00:37:43.332 START TEST fio_dif_digest 00:37:43.332 ************************************ 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:43.332 bdev_null0 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:43.332 [2024-07-26 06:29:54.471491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@532 -- # config=() 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:43.332 { 00:37:43.332 "params": { 00:37:43.332 "name": "Nvme$subsystem", 00:37:43.332 "trtype": "$TEST_TRANSPORT", 00:37:43.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:43.332 "adrfam": "ipv4", 00:37:43.332 "trsvcid": "$NVMF_PORT", 00:37:43.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:43.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:43.332 "hdgst": ${hdgst:-false}, 00:37:43.332 "ddgst": ${ddgst:-false} 00:37:43.332 }, 00:37:43.332 "method": "bdev_nvme_attach_controller" 00:37:43.332 } 00:37:43.332 EOF 00:37:43.332 )") 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:37:43.332 06:29:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:43.332 "params": { 00:37:43.332 "name": "Nvme0", 00:37:43.332 "trtype": "tcp", 00:37:43.332 "traddr": "10.0.0.2", 00:37:43.332 "adrfam": "ipv4", 00:37:43.332 "trsvcid": "4420", 00:37:43.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.332 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:43.333 "hdgst": true, 00:37:43.333 "ddgst": true 00:37:43.333 }, 00:37:43.333 "method": "bdev_nvme_attach_controller" 00:37:43.333 }' 00:37:43.333 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:43.333 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:43.333 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:37:43.333 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:43.333 06:29:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:43.591 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:43.591 ... 
00:37:43.591 fio-3.35 00:37:43.591 Starting 3 threads 00:37:43.591 EAL: No free 2048 kB hugepages reported on node 1 00:37:55.785 00:37:55.785 filename0: (groupid=0, jobs=1): err= 0: pid=329932: Fri Jul 26 06:30:05 2024 00:37:55.785 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(208MiB/10048msec) 00:37:55.785 slat (nsec): min=7689, max=49263, avg=19769.24, stdev=4485.64 00:37:55.785 clat (usec): min=13557, max=54773, avg=18107.13, stdev=1786.98 00:37:55.785 lat (usec): min=13575, max=54791, avg=18126.90, stdev=1786.81 00:37:55.785 clat percentiles (usec): 00:37:55.785 | 1.00th=[15008], 5.00th=[15926], 10.00th=[16450], 20.00th=[16909], 00:37:55.785 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:37:55.785 | 70.00th=[18744], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:37:55.785 | 99.00th=[21890], 99.50th=[22152], 99.90th=[47973], 99.95th=[54789], 00:37:55.785 | 99.99th=[54789] 00:37:55.785 bw ( KiB/s): min=20224, max=22016, per=33.09%, avg=21222.40, stdev=524.64, samples=20 00:37:55.785 iops : min= 158, max= 172, avg=165.80, stdev= 4.10, samples=20 00:37:55.785 lat (msec) : 20=92.59%, 50=7.35%, 100=0.06% 00:37:55.785 cpu : usr=93.56%, sys=5.91%, ctx=25, majf=0, minf=1634 00:37:55.785 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.785 issued rwts: total=1660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:55.785 filename0: (groupid=0, jobs=1): err= 0: pid=329933: Fri Jul 26 06:30:05 2024 00:37:55.785 read: IOPS=174, BW=21.8MiB/s (22.9MB/s)(219MiB/10050msec) 00:37:55.785 slat (nsec): min=7515, max=61624, avg=23871.48, stdev=6170.79 00:37:55.785 clat (usec): min=12436, max=60454, avg=17123.75, stdev=1788.59 00:37:55.785 lat (usec): min=12458, max=60479, avg=17147.62, 
stdev=1788.27 00:37:55.785 clat percentiles (usec): 00:37:55.785 | 1.00th=[14222], 5.00th=[15008], 10.00th=[15533], 20.00th=[16057], 00:37:55.785 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:37:55.785 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19006], 00:37:55.785 | 99.00th=[20055], 99.50th=[20579], 99.90th=[49546], 99.95th=[60556], 00:37:55.785 | 99.99th=[60556] 00:37:55.785 bw ( KiB/s): min=21504, max=23808, per=34.99%, avg=22438.40, stdev=588.92, samples=20 00:37:55.785 iops : min= 168, max= 186, avg=175.30, stdev= 4.60, samples=20 00:37:55.785 lat (msec) : 20=98.52%, 50=1.42%, 100=0.06% 00:37:55.785 cpu : usr=91.96%, sys=6.79%, ctx=211, majf=0, minf=1637 00:37:55.785 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.785 issued rwts: total=1755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:55.785 filename0: (groupid=0, jobs=1): err= 0: pid=329934: Fri Jul 26 06:30:05 2024 00:37:55.785 read: IOPS=161, BW=20.2MiB/s (21.1MB/s)(203MiB/10048msec) 00:37:55.785 slat (nsec): min=8169, max=43488, avg=20119.82, stdev=4008.94 00:37:55.785 clat (usec): min=13033, max=57212, avg=18554.96, stdev=1891.33 00:37:55.785 lat (usec): min=13051, max=57230, avg=18575.08, stdev=1891.05 00:37:55.785 clat percentiles (usec): 00:37:55.785 | 1.00th=[15270], 5.00th=[16450], 10.00th=[16909], 20.00th=[17433], 00:37:55.785 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:37:55.785 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20055], 95.00th=[20579], 00:37:55.785 | 99.00th=[21627], 99.50th=[22152], 99.90th=[52167], 99.95th=[57410], 00:37:55.785 | 99.99th=[57410] 00:37:55.785 bw ( KiB/s): min=19968, max=21760, per=32.30%, avg=20712.40, stdev=547.54, 
samples=20 00:37:55.785 iops : min= 156, max= 170, avg=161.80, stdev= 4.30, samples=20 00:37:55.785 lat (msec) : 20=87.16%, 50=12.72%, 100=0.12% 00:37:55.785 cpu : usr=93.75%, sys=5.71%, ctx=19, majf=0, minf=1636 00:37:55.785 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.785 issued rwts: total=1620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:55.785 00:37:55.785 Run status group 0 (all jobs): 00:37:55.785 READ: bw=62.6MiB/s (65.7MB/s), 20.2MiB/s-21.8MiB/s (21.1MB/s-22.9MB/s), io=629MiB (660MB), run=10048-10050msec 00:37:55.785 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:37:55.785 ----------------------------------------------------- 00:37:55.785 Suppressions used: 00:37:55.785 count bytes template 00:37:55.785 5 44 /usr/src/fio/parse.c 00:37:55.785 1 8 libtcmalloc_minimal.so 00:37:55.785 1 904 libcrypto.so 00:37:55.785 ----------------------------------------------------- 00:37:55.785 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.785 00:37:55.785 real 0m12.217s 00:37:55.785 user 0m30.188s 00:37:55.785 sys 0m2.270s 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:55.785 06:30:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.785 ************************************ 00:37:55.785 END TEST fio_dif_digest 00:37:55.785 ************************************ 00:37:55.785 06:30:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:55.785 06:30:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:55.785 rmmod nvme_tcp 00:37:55.785 rmmod nvme_fabrics 00:37:55.785 rmmod nvme_keyring 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 323144 ']' 00:37:55.785 06:30:06 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 
323144 00:37:55.785 06:30:06 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 323144 ']' 00:37:55.785 06:30:06 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 323144 00:37:55.785 06:30:06 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:37:55.785 06:30:06 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:55.785 06:30:06 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 323144 00:37:55.786 06:30:06 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:55.786 06:30:06 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:55.786 06:30:06 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 323144' 00:37:55.786 killing process with pid 323144 00:37:55.786 06:30:06 nvmf_dif -- common/autotest_common.sh@969 -- # kill 323144 00:37:55.786 06:30:06 nvmf_dif -- common/autotest_common.sh@974 -- # wait 323144 00:37:56.720 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:37:56.979 06:30:08 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:56.979 06:30:08 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:57.914 Waiting for block devices as requested 00:37:57.914 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:57.914 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:58.178 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:58.178 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:58.178 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:58.178 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:58.441 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:58.441 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:58.441 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:58.441 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:58.698 0000:80:04.6 (8086 0e26): vfio-pci -> 
ioatdma 00:37:58.698 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:58.698 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:58.698 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:58.957 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:58.957 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:58.957 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:59.216 06:30:10 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:59.216 06:30:10 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:59.216 06:30:10 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:59.216 06:30:10 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:59.216 06:30:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.216 06:30:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:59.216 06:30:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.120 06:30:12 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:01.120 00:38:01.120 real 1m15.906s 00:38:01.120 user 6m43.998s 00:38:01.120 sys 0m20.407s 00:38:01.120 06:30:12 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:01.120 06:30:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.120 ************************************ 00:38:01.120 END TEST nvmf_dif 00:38:01.120 ************************************ 00:38:01.120 06:30:12 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:01.120 06:30:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:01.120 06:30:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:01.120 06:30:12 -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 ************************************ 00:38:01.378 START TEST nvmf_abort_qd_sizes 00:38:01.378 ************************************ 00:38:01.378 06:30:12 
nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:01.378 * Looking for test storage... 00:38:01.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.378 06:30:12 
nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.378 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.379 06:30:12 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:01.379 06:30:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:03.279 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:03.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:03.280 06:30:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:03.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:03.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:03.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:03.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:03.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:38:03.280 00:38:03.280 --- 10.0.0.2 ping statistics --- 00:38:03.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.280 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:03.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:03.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:38:03.280 00:38:03.280 --- 10.0.0.1 ping statistics --- 00:38:03.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.280 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:03.280 06:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:04.696 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:04.696 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:04.696 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:04.696 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:04.696 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:04.696 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:04.696 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:04.696 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:04.696 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:04.696 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:04.696 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:04.696 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:04.696 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:04.696 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:04.696 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:38:04.696 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:05.264 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=335590 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 335590 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 335590 ']' 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:05.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:05.524 06:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:05.784 [2024-07-26 06:30:16.871228] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:05.784 [2024-07-26 06:30:16.871371] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:05.784 EAL: No free 2048 kB hugepages reported on node 1 00:38:05.784 [2024-07-26 06:30:17.008621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:06.044 [2024-07-26 06:30:17.267590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:06.044 [2024-07-26 06:30:17.267670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:06.044 [2024-07-26 06:30:17.267698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:06.044 [2024-07-26 06:30:17.267718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:06.044 [2024-07-26 06:30:17.267740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:06.044 [2024-07-26 06:30:17.268146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.044 [2024-07-26 06:30:17.268180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:06.044 [2024-07-26 06:30:17.268244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.044 [2024-07-26 06:30:17.268254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:06.611 06:30:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:06.611 ************************************ 00:38:06.611 START TEST spdk_target_abort 00:38:06.611 ************************************ 00:38:06.611 06:30:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:38:06.611 06:30:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:06.611 06:30:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:38:06.611 06:30:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.611 06:30:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:09.895 spdk_targetn1 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:09.895 [2024-07-26 06:30:20.718532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:09.895 [2024-07-26 06:30:20.764021] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:09.895 06:30:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.896 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.180 Initializing NVMe Controllers 00:38:13.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:13.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:13.180 Initialization complete. Launching workers. 
00:38:13.180 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9019, failed: 0 00:38:13.180 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 7811 00:38:13.180 success 728, unsuccess 480, failed 0 00:38:13.180 06:30:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:13.180 06:30:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:13.180 EAL: No free 2048 kB hugepages reported on node 1 00:38:16.462 Initializing NVMe Controllers 00:38:16.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:16.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:16.462 Initialization complete. Launching workers. 
00:38:16.462 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8580, failed: 0 00:38:16.462 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7342 00:38:16.462 success 348, unsuccess 890, failed 0 00:38:16.462 06:30:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:16.462 06:30:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:16.462 EAL: No free 2048 kB hugepages reported on node 1 00:38:19.742 Initializing NVMe Controllers 00:38:19.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:19.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:19.742 Initialization complete. Launching workers. 
00:38:19.742 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27167, failed: 0 00:38:19.742 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2729, failed to submit 24438 00:38:19.743 success 215, unsuccess 2514, failed 0 00:38:19.743 06:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:19.743 06:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.743 06:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.743 06:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.743 06:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:19.743 06:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.743 06:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 335590 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 335590 ']' 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 335590 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 335590 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 335590' 00:38:21.116 killing process with pid 335590 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 335590 00:38:21.116 06:30:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 335590 00:38:22.055 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:38:22.055 00:38:22.055 real 0m15.412s 00:38:22.055 user 0m59.346s 00:38:22.055 sys 0m2.596s 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:22.055 ************************************ 00:38:22.055 END TEST spdk_target_abort 00:38:22.055 ************************************ 00:38:22.055 06:30:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:22.055 06:30:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:22.055 06:30:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:22.055 06:30:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:22.055 ************************************ 00:38:22.055 START TEST kernel_target_abort 00:38:22.055 ************************************ 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:22.055 06:30:33 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:22.055 06:30:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:23.429 Waiting for block devices as requested 00:38:23.429 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:38:23.429 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:23.429 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:23.688 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:23.688 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:23.688 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:23.688 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:23.947 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:23.947 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:23.947 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:23.947 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:24.206 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:24.206 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:24.206 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:24.206 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:24.465 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:24.465 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:24.723 06:30:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:24.723 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:24.982 No valid GPT data, bailing 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- 
# echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:38:24.982 00:38:24.982 Discovery Log Number of Records 2, Generation counter 2 00:38:24.982 =====Discovery Log Entry 0====== 00:38:24.982 trtype: tcp 00:38:24.982 adrfam: ipv4 00:38:24.982 subtype: current discovery subsystem 00:38:24.982 treq: not specified, sq flow control disable supported 00:38:24.982 portid: 1 00:38:24.982 trsvcid: 4420 00:38:24.982 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:24.982 traddr: 10.0.0.1 00:38:24.982 eflags: none 00:38:24.982 sectype: none 00:38:24.982 =====Discovery Log Entry 1====== 00:38:24.982 trtype: tcp 00:38:24.982 adrfam: ipv4 00:38:24.982 subtype: nvme subsystem 00:38:24.982 treq: not specified, sq flow control disable supported 00:38:24.982 portid: 1 00:38:24.982 trsvcid: 4420 00:38:24.982 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:24.982 
traddr: 10.0.0.1 00:38:24.982 eflags: none 00:38:24.982 sectype: none 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 
00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:24.982 06:30:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:24.982 EAL: No free 2048 kB hugepages reported on node 1 00:38:28.305 Initializing NVMe Controllers 00:38:28.305 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:28.305 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:28.305 Initialization complete. Launching workers. 
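The mkdir/echo/ln calls that configure_kernel_target emits in the trace above follow the standard Linux nvmet configfs flow. A distilled sketch of that sequence (the NQN, address, and port match the test; the backing device name and module list are assumptions read from this run's output, not guaranteed for other hosts):

```shell
#!/bin/bash
# Sketch of the kernel NVMe-oF target setup done by configure_kernel_target in
# nvmf/common.sh. Requires root, nvmet/nvmet-tcp modules, and a free block
# device (/dev/nvme0n1 here, as selected in this run).
set -e
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/$NQN
NS=$SUBSYS/namespaces/1
PORT=$NVMET/ports/1

modprobe nvmet nvmet-tcp

mkdir "$SUBSYS"                                  # create the subsystem
mkdir "$NS"                                      # namespace 1
mkdir "$PORT"                                    # port 1

echo "SPDK-$NQN"    > "$SUBSYS/attr_model"       # model string
echo 1              > "$SUBSYS/attr_allow_any_host"
echo /dev/nvme0n1   > "$NS/device_path"          # backing block device
echo 1              > "$NS/enable"

echo 10.0.0.1       > "$PORT/addr_traddr"        # listen address
echo tcp            > "$PORT/addr_trtype"
echo 4420           > "$PORT/addr_trsvcid"
echo ipv4           > "$PORT/addr_adrfam"

# Export the subsystem on the port; nvmet starts listening here.
ln -s "$SUBSYS" "$PORT/subsystems/"
```

The `nvme discover` output that follows in the log (two discovery records, one for the discovery subsystem and one for nqn.2016-06.io.spdk:testnqn) is how the test verifies this setup took effect.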
00:38:28.305 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29561, failed: 0 00:38:28.305 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29561, failed to submit 0 00:38:28.305 success 0, unsuccess 29561, failed 0 00:38:28.305 06:30:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:28.305 06:30:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:28.305 EAL: No free 2048 kB hugepages reported on node 1 00:38:31.589 Initializing NVMe Controllers 00:38:31.589 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:31.589 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:31.589 Initialization complete. Launching workers. 
00:38:31.589 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56782, failed: 0 00:38:31.589 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14298, failed to submit 42484 00:38:31.589 success 0, unsuccess 14298, failed 0 00:38:31.589 06:30:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:31.589 06:30:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:31.589 EAL: No free 2048 kB hugepages reported on node 1 00:38:34.871 Initializing NVMe Controllers 00:38:34.871 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:34.871 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:34.871 Initialization complete. Launching workers. 
00:38:34.871 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55827, failed: 0 00:38:34.871 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13930, failed to submit 41897 00:38:34.871 success 0, unsuccess 13930, failed 0 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:34.871 06:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:35.809 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:35.809 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:35.809 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:35.809 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:35.809 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:35.809 
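The clean_kernel_target calls interleaved above undo the setup in reverse order. A sketch of that teardown, reconstructed from the commands in this trace (paths match the test; running it requires root and an idle target):

```shell
#!/bin/bash
# Sketch of clean_kernel_target from nvmf/common.sh: disable the namespace,
# drop the port->subsystem export, remove the configfs directories innermost
# first, then unload the transport and core modules.
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet

echo 0 > "$NVMET/subsystems/$NQN/namespaces/1/enable"   # quiesce namespace
rm -f  "$NVMET/ports/1/subsystems/$NQN"                 # unexport from port
rmdir  "$NVMET/subsystems/$NQN/namespaces/1"
rmdir  "$NVMET/ports/1"
rmdir  "$NVMET/subsystems/$NQN"
modprobe -r nvmet_tcp nvmet
```

The rmdir order matters: configfs refuses to remove a subsystem directory while a namespace or a port symlink still references it.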
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:35.809 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:35.809 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:35.809 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:36.746 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:38:37.005 00:38:37.005 real 0m14.809s 00:38:37.005 user 0m6.157s 00:38:37.005 sys 0m3.675s 00:38:37.005 06:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:37.005 06:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.005 ************************************ 00:38:37.005 END TEST kernel_target_abort 00:38:37.005 ************************************ 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:37.005 rmmod nvme_tcp 00:38:37.005 rmmod nvme_fabrics 00:38:37.005 rmmod nvme_keyring 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 335590 ']' 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 335590 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 335590 ']' 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 335590 00:38:37.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (335590) - No such process 00:38:37.005 06:30:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 335590 is not found' 00:38:37.005 Process with pid 335590 is not found 00:38:37.006 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:37.006 06:30:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:37.943 Waiting for block devices as requested 00:38:37.943 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:38:38.201 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:38.201 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:38.459 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:38.459 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:38.459 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:38.459 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:38.459 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:38.717 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:38.717 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:38.717 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:38.717 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:38.975 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:38.975 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:38.975 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:38:38.975 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:38.975 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:39.235 06:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:39.235 06:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:39.235 06:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:39.235 06:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:39.235 06:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.235 06:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:39.235 06:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.143 06:30:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:41.143 00:38:41.143 real 0m39.984s 00:38:41.143 user 1m7.768s 00:38:41.143 sys 0m9.443s 00:38:41.143 06:30:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.143 06:30:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:41.143 ************************************ 00:38:41.143 END TEST nvmf_abort_qd_sizes 00:38:41.143 ************************************ 00:38:41.401 06:30:52 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:41.401 06:30:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:41.401 06:30:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.401 06:30:52 -- common/autotest_common.sh@10 -- # set +x 00:38:41.401 ************************************ 00:38:41.401 START TEST keyring_file 00:38:41.401 ************************************ 00:38:41.401 06:30:52 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 
00:38:41.401 * Looking for test storage... 00:38:41.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:41.401 06:30:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:41.401 06:30:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.401 06:30:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.402 06:30:52 
keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.402 06:30:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.402 06:30:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.402 06:30:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.402 06:30:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.402 06:30:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.402 06:30:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.402 06:30:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:41.402 06:30:52 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:41.402 06:30:52 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uHQMLMtZSx 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uHQMLMtZSx 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uHQMLMtZSx 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uHQMLMtZSx 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bhtxOpkaTE 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:41.402 06:30:52 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:41.402 06:30:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bhtxOpkaTE 00:38:41.402 06:30:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bhtxOpkaTE 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.bhtxOpkaTE 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=341813 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:41.402 06:30:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 341813 00:38:41.402 06:30:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 341813 ']' 00:38:41.402 06:30:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.402 06:30:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:41.402 06:30:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.402 06:30:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:41.402 06:30:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.662 [2024-07-26 06:30:52.754382] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
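The prep_key steps above pipe each hex key through an inline `python -` block (format_interchange_psk) before writing it to a 0600 temp file. A sketch of what that helper appears to compute; the exact interchange layout (prefix, hex digest id, base64 of key plus little-endian CRC-32, trailing colon) is an assumption inferred from the helper's name and arguments, not confirmed by this log:

```shell
#!/bin/bash
# Sketch of prep_key/format_interchange_psk from test/keyring/common.sh and
# nvmf/common.sh. Assumed layout: NVMeTLSkey-1:<hh>:base64(key || CRC32(key)):
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(key))      # CRC-32 appended little-endian
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

psk=$(format_interchange_psk 00112233445566778899aabbccddeeff 0)
path=$(mktemp)
echo "$psk" > "$path"
chmod 0600 "$path"       # keys must not be world-readable
echo "$psk"
```

With digest 0 the result is self-checking: decoding the base64 field yields the 16 key bytes followed by their own CRC-32.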
00:38:41.662 [2024-07-26 06:30:52.754539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341813 ] 00:38:41.662 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.662 [2024-07-26 06:30:52.877067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.922 [2024-07-26 06:30:53.131293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.861 06:30:53 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:42.861 06:30:53 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:42.861 06:30:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:42.861 06:30:53 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.861 06:30:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:42.861 [2024-07-26 06:30:54.003191] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:42.861 null0 00:38:42.861 [2024-07-26 06:30:54.035230] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:42.861 [2024-07-26 06:30:54.035792] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:42.861 [2024-07-26 06:30:54.043250] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.861 06:30:54 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:42.861 [2024-07-26 06:30:54.051252] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:42.861 request: 00:38:42.861 { 00:38:42.861 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:42.861 "secure_channel": false, 00:38:42.861 "listen_address": { 00:38:42.861 "trtype": "tcp", 00:38:42.861 "traddr": "127.0.0.1", 00:38:42.861 "trsvcid": "4420" 00:38:42.861 }, 00:38:42.861 "method": "nvmf_subsystem_add_listener", 00:38:42.861 "req_id": 1 00:38:42.861 } 00:38:42.861 Got JSON-RPC error response 00:38:42.861 response: 00:38:42.861 { 00:38:42.861 "code": -32602, 00:38:42.861 "message": "Invalid parameters" 00:38:42.861 } 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:42.861 06:30:54 keyring_file -- keyring/file.sh@46 -- # bperfpid=341956 00:38:42.861 06:30:54 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:42.861 06:30:54 keyring_file -- keyring/file.sh@48 -- # waitforlisten 341956 /var/tmp/bperf.sock 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 341956 ']' 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:42.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:42.861 06:30:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:42.861 [2024-07-26 06:30:54.135946] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:38:42.861 [2024-07-26 06:30:54.136118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341956 ] 00:38:43.119 EAL: No free 2048 kB hugepages reported on node 1 00:38:43.119 [2024-07-26 06:30:54.266438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.377 [2024-07-26 06:30:54.521918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:43.943 06:30:55 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:43.943 06:30:55 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:43.943 06:30:55 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:43.943 06:30:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:44.203 06:30:55 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bhtxOpkaTE 00:38:44.203 06:30:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bhtxOpkaTE 00:38:44.462 06:30:55 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:38:44.462 06:30:55 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:38:44.462 06:30:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.462 06:30:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:44.462 06:30:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.462 06:30:55 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.uHQMLMtZSx == 
\/\t\m\p\/\t\m\p\.\u\H\Q\M\L\M\t\Z\S\x ]] 00:38:44.462 06:30:55 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:38:44.462 06:30:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:44.462 06:30:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.462 06:30:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.462 06:30:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:44.720 06:30:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.bhtxOpkaTE == \/\t\m\p\/\t\m\p\.\b\h\t\x\O\p\k\a\T\E ]] 00:38:44.720 06:30:56 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:38:44.720 06:30:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:44.720 06:30:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:44.720 06:30:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.720 06:30:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.720 06:30:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.012 06:30:56 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:38:45.012 06:30:56 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:38:45.012 06:30:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:45.012 06:30:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.012 06:30:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.012 06:30:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.012 06:30:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:45.308 06:30:56 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:38:45.308 06:30:56 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:45.308 06:30:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:45.568 [2024-07-26 06:30:56.773828] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:45.569 nvme0n1 00:38:45.569 06:30:56 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:38:45.569 06:30:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:45.569 06:30:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.569 06:30:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.569 06:30:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.569 06:30:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.827 06:30:57 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:38:45.827 06:30:57 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:38:45.827 06:30:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:45.827 06:30:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.827 06:30:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.827 06:30:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.827 06:30:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:46.085 06:30:57 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:38:46.085 06:30:57 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:46.344 Running I/O for 1 seconds... 00:38:47.281 00:38:47.281 Latency(us) 00:38:47.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.281 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:47.281 nvme0n1 : 1.03 4634.81 18.10 0.00 0.00 27245.81 11796.48 42913.94 00:38:47.281 =================================================================================================================== 00:38:47.281 Total : 4634.81 18.10 0.00 0.00 27245.81 11796.48 42913.94 00:38:47.281 0 00:38:47.281 06:30:58 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:47.281 06:30:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:47.539 06:30:58 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:38:47.539 06:30:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:47.539 06:30:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:47.539 06:30:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:47.539 06:30:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:47.539 06:30:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:47.796 06:30:59 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:38:47.796 06:30:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:38:47.796 06:30:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:47.796 06:30:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:47.796 06:30:59 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:47.796 06:30:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:47.796 06:30:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:48.055 06:30:59 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:48.055 06:30:59 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:48.055 06:30:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:48.055 06:30:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:48.055 06:30:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:48.055 06:30:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:48.055 06:30:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:48.055 06:30:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:48.055 06:30:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:48.055 06:30:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:48.313 [2024-07-26 06:30:59.546014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:48.313 [2024-07-26 06:30:59.546252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:38:48.313 [2024-07-26 06:30:59.547226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:38:48.313 [2024-07-26 06:30:59.548223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:48.313 [2024-07-26 06:30:59.548252] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:48.313 [2024-07-26 06:30:59.548272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:48.313 request: 00:38:48.313 { 00:38:48.313 "name": "nvme0", 00:38:48.313 "trtype": "tcp", 00:38:48.313 "traddr": "127.0.0.1", 00:38:48.313 "adrfam": "ipv4", 00:38:48.313 "trsvcid": "4420", 00:38:48.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:48.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:48.313 "prchk_reftag": false, 00:38:48.313 "prchk_guard": false, 00:38:48.313 "hdgst": false, 00:38:48.313 "ddgst": false, 00:38:48.313 "psk": "key1", 00:38:48.313 "method": "bdev_nvme_attach_controller", 00:38:48.313 "req_id": 1 00:38:48.313 } 00:38:48.313 Got JSON-RPC error response 00:38:48.313 response: 00:38:48.313 { 00:38:48.313 "code": -5, 00:38:48.313 "message": "Input/output error" 00:38:48.313 } 00:38:48.313 06:30:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:48.313 06:30:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:48.313 06:30:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:48.313 06:30:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:48.313 06:30:59 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 
00:38:48.313 06:30:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:48.313 06:30:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.313 06:30:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.313 06:30:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.313 06:30:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:48.572 06:30:59 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:38:48.572 06:30:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:38:48.572 06:30:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:48.572 06:30:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.572 06:30:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.572 06:30:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.572 06:30:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:48.830 06:31:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:48.830 06:31:00 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:38:48.830 06:31:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:49.087 06:31:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:38:49.088 06:31:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:49.345 06:31:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:38:49.345 06:31:00 keyring_file -- keyring/file.sh@77 -- # jq length 00:38:49.345 06:31:00 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.603 06:31:00 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:38:49.603 06:31:00 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.uHQMLMtZSx 00:38:49.603 06:31:00 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:49.603 06:31:00 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:49.603 06:31:00 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:49.603 06:31:00 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:49.603 06:31:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:49.603 06:31:00 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:49.603 06:31:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:49.603 06:31:00 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:49.603 06:31:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:49.862 [2024-07-26 06:31:01.045111] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uHQMLMtZSx': 0100660 00:38:49.862 [2024-07-26 06:31:01.045162] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:49.862 request: 00:38:49.862 { 00:38:49.862 "name": "key0", 00:38:49.862 "path": "/tmp/tmp.uHQMLMtZSx", 00:38:49.862 "method": "keyring_file_add_key", 00:38:49.862 "req_id": 1 00:38:49.862 } 00:38:49.862 Got JSON-RPC error response 00:38:49.862 response: 00:38:49.862 { 00:38:49.862 "code": -1, 00:38:49.862 "message": "Operation not 
permitted" 00:38:49.862 } 00:38:49.862 06:31:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:49.862 06:31:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:49.862 06:31:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:49.862 06:31:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:49.862 06:31:01 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.uHQMLMtZSx 00:38:49.862 06:31:01 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:49.862 06:31:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uHQMLMtZSx 00:38:50.120 06:31:01 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.uHQMLMtZSx 00:38:50.120 06:31:01 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:38:50.120 06:31:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:50.120 06:31:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.120 06:31:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.120 06:31:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.120 06:31:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:50.379 06:31:01 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:38:50.379 06:31:01 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:50.379 06:31:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:50.379 06:31:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:50.379 06:31:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:50.379 06:31:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:50.379 06:31:01 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:50.379 06:31:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:50.379 06:31:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:50.379 06:31:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:50.637 [2024-07-26 06:31:01.799304] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uHQMLMtZSx': No such file or directory 00:38:50.637 [2024-07-26 06:31:01.799353] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:50.637 [2024-07-26 06:31:01.799409] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:50.637 [2024-07-26 06:31:01.799430] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:50.637 [2024-07-26 06:31:01.799452] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:50.637 request: 00:38:50.637 { 00:38:50.637 "name": "nvme0", 00:38:50.637 "trtype": "tcp", 00:38:50.637 "traddr": "127.0.0.1", 00:38:50.637 "adrfam": "ipv4", 00:38:50.637 "trsvcid": "4420", 00:38:50.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:50.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:38:50.637 "prchk_reftag": false, 00:38:50.637 "prchk_guard": false, 00:38:50.637 "hdgst": false, 00:38:50.637 "ddgst": false, 00:38:50.637 "psk": "key0", 00:38:50.637 "method": "bdev_nvme_attach_controller", 00:38:50.637 "req_id": 1 00:38:50.637 } 00:38:50.637 Got JSON-RPC error response 00:38:50.637 response: 00:38:50.637 { 00:38:50.637 "code": -19, 00:38:50.637 "message": "No such device" 00:38:50.637 } 00:38:50.637 06:31:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:50.637 06:31:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:50.637 06:31:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:50.637 06:31:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:50.637 06:31:01 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:38:50.637 06:31:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:50.895 06:31:02 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:50.895 06:31:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:50.895 06:31:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:50.895 06:31:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:50.895 06:31:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:50.895 06:31:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:50.895 06:31:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dybqF2JFM1 00:38:50.895 06:31:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:50.896 06:31:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:50.896 06:31:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:50.896 06:31:02 
keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:50.896 06:31:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:50.896 06:31:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:50.896 06:31:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:50.896 06:31:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dybqF2JFM1 00:38:50.896 06:31:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dybqF2JFM1 00:38:50.896 06:31:02 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dybqF2JFM1 00:38:50.896 06:31:02 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dybqF2JFM1 00:38:50.896 06:31:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dybqF2JFM1 00:38:51.154 06:31:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:51.154 06:31:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:51.411 nvme0n1 00:38:51.411 06:31:02 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:38:51.411 06:31:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:51.411 06:31:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:51.412 06:31:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.412 06:31:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.412 06:31:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key0")' 00:38:51.669 06:31:02 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:38:51.669 06:31:02 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:38:51.670 06:31:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:51.927 06:31:03 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:38:51.927 06:31:03 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:38:51.927 06:31:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.927 06:31:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.927 06:31:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:52.188 06:31:03 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:38:52.188 06:31:03 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:38:52.188 06:31:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:52.188 06:31:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.188 06:31:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.188 06:31:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.188 06:31:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:52.448 06:31:03 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:38:52.448 06:31:03 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:52.448 06:31:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:52.705 06:31:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:38:52.705 06:31:03 keyring_file -- keyring/file.sh@104 -- # jq length 00:38:52.705 06:31:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.963 06:31:04 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:38:52.963 06:31:04 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dybqF2JFM1 00:38:52.963 06:31:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dybqF2JFM1 00:38:53.221 06:31:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bhtxOpkaTE 00:38:53.221 06:31:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bhtxOpkaTE 00:38:53.479 06:31:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:53.479 06:31:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:53.737 nvme0n1 00:38:53.737 06:31:05 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:38:53.737 06:31:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:54.304 06:31:05 keyring_file -- keyring/file.sh@112 -- # config='{ 00:38:54.304 "subsystems": [ 00:38:54.304 { 00:38:54.304 "subsystem": "keyring", 00:38:54.304 "config": [ 00:38:54.304 { 00:38:54.304 "method": "keyring_file_add_key", 00:38:54.304 
"params": { 00:38:54.304 "name": "key0", 00:38:54.304 "path": "/tmp/tmp.dybqF2JFM1" 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "keyring_file_add_key", 00:38:54.304 "params": { 00:38:54.304 "name": "key1", 00:38:54.304 "path": "/tmp/tmp.bhtxOpkaTE" 00:38:54.304 } 00:38:54.304 } 00:38:54.304 ] 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "subsystem": "iobuf", 00:38:54.304 "config": [ 00:38:54.304 { 00:38:54.304 "method": "iobuf_set_options", 00:38:54.304 "params": { 00:38:54.304 "small_pool_count": 8192, 00:38:54.304 "large_pool_count": 1024, 00:38:54.304 "small_bufsize": 8192, 00:38:54.304 "large_bufsize": 135168 00:38:54.304 } 00:38:54.304 } 00:38:54.304 ] 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "subsystem": "sock", 00:38:54.304 "config": [ 00:38:54.304 { 00:38:54.304 "method": "sock_set_default_impl", 00:38:54.304 "params": { 00:38:54.304 "impl_name": "posix" 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "sock_impl_set_options", 00:38:54.304 "params": { 00:38:54.304 "impl_name": "ssl", 00:38:54.304 "recv_buf_size": 4096, 00:38:54.304 "send_buf_size": 4096, 00:38:54.304 "enable_recv_pipe": true, 00:38:54.304 "enable_quickack": false, 00:38:54.304 "enable_placement_id": 0, 00:38:54.304 "enable_zerocopy_send_server": true, 00:38:54.304 "enable_zerocopy_send_client": false, 00:38:54.304 "zerocopy_threshold": 0, 00:38:54.304 "tls_version": 0, 00:38:54.304 "enable_ktls": false 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "sock_impl_set_options", 00:38:54.304 "params": { 00:38:54.304 "impl_name": "posix", 00:38:54.304 "recv_buf_size": 2097152, 00:38:54.304 "send_buf_size": 2097152, 00:38:54.304 "enable_recv_pipe": true, 00:38:54.304 "enable_quickack": false, 00:38:54.304 "enable_placement_id": 0, 00:38:54.304 "enable_zerocopy_send_server": true, 00:38:54.304 "enable_zerocopy_send_client": false, 00:38:54.304 "zerocopy_threshold": 0, 00:38:54.304 "tls_version": 0, 00:38:54.304 "enable_ktls": false 
00:38:54.304 } 00:38:54.304 } 00:38:54.304 ] 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "subsystem": "vmd", 00:38:54.304 "config": [] 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "subsystem": "accel", 00:38:54.304 "config": [ 00:38:54.304 { 00:38:54.304 "method": "accel_set_options", 00:38:54.304 "params": { 00:38:54.304 "small_cache_size": 128, 00:38:54.304 "large_cache_size": 16, 00:38:54.304 "task_count": 2048, 00:38:54.304 "sequence_count": 2048, 00:38:54.304 "buf_count": 2048 00:38:54.304 } 00:38:54.304 } 00:38:54.304 ] 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "subsystem": "bdev", 00:38:54.304 "config": [ 00:38:54.304 { 00:38:54.304 "method": "bdev_set_options", 00:38:54.304 "params": { 00:38:54.304 "bdev_io_pool_size": 65535, 00:38:54.304 "bdev_io_cache_size": 256, 00:38:54.304 "bdev_auto_examine": true, 00:38:54.304 "iobuf_small_cache_size": 128, 00:38:54.304 "iobuf_large_cache_size": 16 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "bdev_raid_set_options", 00:38:54.304 "params": { 00:38:54.304 "process_window_size_kb": 1024, 00:38:54.304 "process_max_bandwidth_mb_sec": 0 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "bdev_iscsi_set_options", 00:38:54.304 "params": { 00:38:54.304 "timeout_sec": 30 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "bdev_nvme_set_options", 00:38:54.304 "params": { 00:38:54.304 "action_on_timeout": "none", 00:38:54.304 "timeout_us": 0, 00:38:54.304 "timeout_admin_us": 0, 00:38:54.304 "keep_alive_timeout_ms": 10000, 00:38:54.304 "arbitration_burst": 0, 00:38:54.304 "low_priority_weight": 0, 00:38:54.304 "medium_priority_weight": 0, 00:38:54.304 "high_priority_weight": 0, 00:38:54.304 "nvme_adminq_poll_period_us": 10000, 00:38:54.304 "nvme_ioq_poll_period_us": 0, 00:38:54.304 "io_queue_requests": 512, 00:38:54.304 "delay_cmd_submit": true, 00:38:54.304 "transport_retry_count": 4, 00:38:54.304 "bdev_retry_count": 3, 00:38:54.304 "transport_ack_timeout": 0, 
00:38:54.304 "ctrlr_loss_timeout_sec": 0, 00:38:54.304 "reconnect_delay_sec": 0, 00:38:54.304 "fast_io_fail_timeout_sec": 0, 00:38:54.304 "disable_auto_failback": false, 00:38:54.304 "generate_uuids": false, 00:38:54.304 "transport_tos": 0, 00:38:54.304 "nvme_error_stat": false, 00:38:54.304 "rdma_srq_size": 0, 00:38:54.304 "io_path_stat": false, 00:38:54.304 "allow_accel_sequence": false, 00:38:54.304 "rdma_max_cq_size": 0, 00:38:54.304 "rdma_cm_event_timeout_ms": 0, 00:38:54.304 "dhchap_digests": [ 00:38:54.304 "sha256", 00:38:54.304 "sha384", 00:38:54.304 "sha512" 00:38:54.304 ], 00:38:54.304 "dhchap_dhgroups": [ 00:38:54.304 "null", 00:38:54.304 "ffdhe2048", 00:38:54.304 "ffdhe3072", 00:38:54.304 "ffdhe4096", 00:38:54.304 "ffdhe6144", 00:38:54.304 "ffdhe8192" 00:38:54.304 ] 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "bdev_nvme_attach_controller", 00:38:54.304 "params": { 00:38:54.304 "name": "nvme0", 00:38:54.304 "trtype": "TCP", 00:38:54.304 "adrfam": "IPv4", 00:38:54.304 "traddr": "127.0.0.1", 00:38:54.304 "trsvcid": "4420", 00:38:54.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.304 "prchk_reftag": false, 00:38:54.304 "prchk_guard": false, 00:38:54.304 "ctrlr_loss_timeout_sec": 0, 00:38:54.304 "reconnect_delay_sec": 0, 00:38:54.304 "fast_io_fail_timeout_sec": 0, 00:38:54.304 "psk": "key0", 00:38:54.304 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:54.304 "hdgst": false, 00:38:54.304 "ddgst": false 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "bdev_nvme_set_hotplug", 00:38:54.304 "params": { 00:38:54.304 "period_us": 100000, 00:38:54.304 "enable": false 00:38:54.304 } 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "method": "bdev_wait_for_examine" 00:38:54.304 } 00:38:54.304 ] 00:38:54.304 }, 00:38:54.304 { 00:38:54.304 "subsystem": "nbd", 00:38:54.304 "config": [] 00:38:54.304 } 00:38:54.304 ] 00:38:54.304 }' 00:38:54.304 06:31:05 keyring_file -- keyring/file.sh@114 -- # killprocess 341956 00:38:54.304 
06:31:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 341956 ']' 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 341956 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 341956 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 341956' 00:38:54.304 killing process with pid 341956 00:38:54.304 06:31:05 keyring_file -- common/autotest_common.sh@969 -- # kill 341956 00:38:54.305 Received shutdown signal, test time was about 1.000000 seconds 00:38:54.305 00:38:54.305 Latency(us) 00:38:54.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.305 =================================================================================================================== 00:38:54.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:54.305 06:31:05 keyring_file -- common/autotest_common.sh@974 -- # wait 341956 00:38:55.238 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:38:55.238 06:31:06 keyring_file -- keyring/file.sh@117 -- # bperfpid=343550 00:38:55.238 06:31:06 keyring_file -- keyring/file.sh@119 -- # waitforlisten 343550 /var/tmp/bperf.sock 00:38:55.238 06:31:06 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 343550 ']' 00:38:55.238 06:31:06 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:55.238 06:31:06 keyring_file -- keyring/file.sh@115 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:55.238 06:31:06 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:55.238 06:31:06 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:38:55.238 "subsystems": [ 00:38:55.238 { 00:38:55.238 "subsystem": "keyring", 00:38:55.238 "config": [ 00:38:55.238 { 00:38:55.238 "method": "keyring_file_add_key", 00:38:55.238 "params": { 00:38:55.238 "name": "key0", 00:38:55.238 "path": "/tmp/tmp.dybqF2JFM1" 00:38:55.238 } 00:38:55.238 }, 00:38:55.238 { 00:38:55.238 "method": "keyring_file_add_key", 00:38:55.238 "params": { 00:38:55.238 "name": "key1", 00:38:55.238 "path": "/tmp/tmp.bhtxOpkaTE" 00:38:55.238 } 00:38:55.238 } 00:38:55.238 ] 00:38:55.238 }, 00:38:55.238 { 00:38:55.238 "subsystem": "iobuf", 00:38:55.238 "config": [ 00:38:55.238 { 00:38:55.238 "method": "iobuf_set_options", 00:38:55.238 "params": { 00:38:55.238 "small_pool_count": 8192, 00:38:55.238 "large_pool_count": 1024, 00:38:55.238 "small_bufsize": 8192, 00:38:55.238 "large_bufsize": 135168 00:38:55.238 } 00:38:55.238 } 00:38:55.238 ] 00:38:55.238 }, 00:38:55.239 { 00:38:55.239 "subsystem": "sock", 00:38:55.239 "config": [ 00:38:55.239 { 00:38:55.239 "method": "sock_set_default_impl", 00:38:55.239 "params": { 00:38:55.239 "impl_name": "posix" 00:38:55.239 } 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "method": "sock_impl_set_options", 00:38:55.239 "params": { 00:38:55.239 "impl_name": "ssl", 00:38:55.239 "recv_buf_size": 4096, 00:38:55.239 "send_buf_size": 4096, 00:38:55.239 "enable_recv_pipe": true, 00:38:55.239 "enable_quickack": false, 00:38:55.239 "enable_placement_id": 0, 00:38:55.239 "enable_zerocopy_send_server": true, 00:38:55.239 "enable_zerocopy_send_client": false, 00:38:55.239 "zerocopy_threshold": 0, 00:38:55.239 "tls_version": 0, 00:38:55.239 "enable_ktls": false 00:38:55.239 } 00:38:55.239 }, 00:38:55.239 { 
00:38:55.239 "method": "sock_impl_set_options", 00:38:55.239 "params": { 00:38:55.239 "impl_name": "posix", 00:38:55.239 "recv_buf_size": 2097152, 00:38:55.239 "send_buf_size": 2097152, 00:38:55.239 "enable_recv_pipe": true, 00:38:55.239 "enable_quickack": false, 00:38:55.239 "enable_placement_id": 0, 00:38:55.239 "enable_zerocopy_send_server": true, 00:38:55.239 "enable_zerocopy_send_client": false, 00:38:55.239 "zerocopy_threshold": 0, 00:38:55.239 "tls_version": 0, 00:38:55.239 "enable_ktls": false 00:38:55.239 } 00:38:55.239 } 00:38:55.239 ] 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "subsystem": "vmd", 00:38:55.239 "config": [] 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "subsystem": "accel", 00:38:55.239 "config": [ 00:38:55.239 { 00:38:55.239 "method": "accel_set_options", 00:38:55.239 "params": { 00:38:55.239 "small_cache_size": 128, 00:38:55.239 "large_cache_size": 16, 00:38:55.239 "task_count": 2048, 00:38:55.239 "sequence_count": 2048, 00:38:55.239 "buf_count": 2048 00:38:55.239 } 00:38:55.239 } 00:38:55.239 ] 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "subsystem": "bdev", 00:38:55.239 "config": [ 00:38:55.239 { 00:38:55.239 "method": "bdev_set_options", 00:38:55.239 "params": { 00:38:55.239 "bdev_io_pool_size": 65535, 00:38:55.239 "bdev_io_cache_size": 256, 00:38:55.239 "bdev_auto_examine": true, 00:38:55.239 "iobuf_small_cache_size": 128, 00:38:55.239 "iobuf_large_cache_size": 16 00:38:55.239 } 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "method": "bdev_raid_set_options", 00:38:55.239 "params": { 00:38:55.239 "process_window_size_kb": 1024, 00:38:55.239 "process_max_bandwidth_mb_sec": 0 00:38:55.239 } 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "method": "bdev_iscsi_set_options", 00:38:55.239 "params": { 00:38:55.239 "timeout_sec": 30 00:38:55.239 } 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "method": "bdev_nvme_set_options", 00:38:55.239 "params": { 00:38:55.239 "action_on_timeout": "none", 00:38:55.239 "timeout_us": 0, 00:38:55.239 
"timeout_admin_us": 0, 00:38:55.239 "keep_alive_timeout_ms": 10000, 00:38:55.239 "arbitration_burst": 0, 00:38:55.239 "low_priority_weight": 0, 00:38:55.239 "medium_priority_weight": 0, 00:38:55.239 "high_priority_weight": 0, 00:38:55.239 "nvme_adminq_poll_period_us": 10000, 00:38:55.239 "nvme_ioq_poll_period_us": 0, 00:38:55.239 "io_queue_requests": 512, 00:38:55.239 "delay_cmd_submit": true, 00:38:55.239 "transport_retry_count": 4, 00:38:55.239 "bdev_retry_count": 3, 00:38:55.239 "transport_ack_timeout": 0, 00:38:55.239 "ctrlr_loss_timeout_sec": 0, 00:38:55.239 "reconnect_delay_sec": 0, 00:38:55.239 "fast_io_fail_timeout_sec": 0, 00:38:55.239 "disable_auto_failback": false, 00:38:55.239 "generate_uuids": false, 00:38:55.239 "transport_tos": 0, 00:38:55.239 "nvme_error_stat": false, 00:38:55.239 "rdma_srq_size": 0, 00:38:55.239 "io_path_stat": false, 00:38:55.239 "allow_accel_sequence": false, 00:38:55.239 "rdma_max_cq_size": 0, 00:38:55.239 "rdma_cm_event_timeout_ms": 0, 00:38:55.239 "dhchap_digests": [ 00:38:55.239 "sha256", 00:38:55.239 "sha384", 00:38:55.239 "sha512" 00:38:55.239 ], 00:38:55.239 "dhchap_dhgroups": [ 00:38:55.239 "null", 00:38:55.239 "ffdhe2048", 00:38:55.239 "ffdhe3072", 00:38:55.239 "ffdhe4096", 00:38:55.239 "ffdhe6144", 00:38:55.239 "ffdhe8192" 00:38:55.239 ] 00:38:55.239 } 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "method": "bdev_nvme_attach_controller", 00:38:55.239 "params": { 00:38:55.239 "name": "nvme0", 00:38:55.239 "trtype": "TCP", 00:38:55.239 "adrfam": "IPv4", 00:38:55.239 "traddr": "127.0.0.1", 00:38:55.239 "trsvcid": "4420", 00:38:55.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:55.239 "prchk_reftag": false, 00:38:55.239 "prchk_guard": false, 00:38:55.239 "ctrlr_loss_timeout_sec": 0, 00:38:55.239 "reconnect_delay_sec": 0, 00:38:55.239 "fast_io_fail_timeout_sec": 0, 00:38:55.239 "psk": "key0", 00:38:55.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:55.239 "hdgst": false, 00:38:55.239 "ddgst": false 00:38:55.239 } 
00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "method": "bdev_nvme_set_hotplug", 00:38:55.239 "params": { 00:38:55.239 "period_us": 100000, 00:38:55.239 "enable": false 00:38:55.239 } 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "method": "bdev_wait_for_examine" 00:38:55.239 } 00:38:55.239 ] 00:38:55.239 }, 00:38:55.239 { 00:38:55.239 "subsystem": "nbd", 00:38:55.239 "config": [] 00:38:55.239 } 00:38:55.239 ] 00:38:55.239 }' 00:38:55.239 06:31:06 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:55.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:55.239 06:31:06 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:55.239 06:31:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:55.239 [2024-07-26 06:31:06.490970] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:55.239 [2024-07-26 06:31:06.491113] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343550 ] 00:38:55.239 EAL: No free 2048 kB hugepages reported on node 1 00:38:55.498 [2024-07-26 06:31:06.618286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.758 [2024-07-26 06:31:06.875219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:56.016 [2024-07-26 06:31:07.315806] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:56.274 06:31:07 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:56.274 06:31:07 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:56.274 06:31:07 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:38:56.274 06:31:07 keyring_file -- 
keyring/file.sh@120 -- # jq length 00:38:56.274 06:31:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.533 06:31:07 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:38:56.533 06:31:07 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:38:56.533 06:31:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:56.533 06:31:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.533 06:31:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.533 06:31:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.533 06:31:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:56.790 06:31:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:56.791 06:31:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:38:56.791 06:31:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:56.791 06:31:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.791 06:31:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.791 06:31:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.791 06:31:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:57.048 06:31:08 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:38:57.048 06:31:08 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:38:57.048 06:31:08 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:38:57.048 06:31:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:57.308 06:31:08 
keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:38:57.308 06:31:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:57.308 06:31:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dybqF2JFM1 /tmp/tmp.bhtxOpkaTE 00:38:57.308 06:31:08 keyring_file -- keyring/file.sh@20 -- # killprocess 343550 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 343550 ']' 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@954 -- # kill -0 343550 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 343550 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 343550' 00:38:57.308 killing process with pid 343550 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@969 -- # kill 343550 00:38:57.308 Received shutdown signal, test time was about 1.000000 seconds 00:38:57.308 00:38:57.308 Latency(us) 00:38:57.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.308 =================================================================================================================== 00:38:57.308 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:57.308 06:31:08 keyring_file -- common/autotest_common.sh@974 -- # wait 343550 00:38:58.247 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:38:58.247 06:31:09 keyring_file -- keyring/file.sh@21 -- # killprocess 341813 00:38:58.247 06:31:09 keyring_file -- 
common/autotest_common.sh@950 -- # '[' -z 341813 ']' 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 341813 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 341813 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 341813' 00:38:58.247 killing process with pid 341813 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@969 -- # kill 341813 00:38:58.247 [2024-07-26 06:31:09.507726] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:38:58.247 06:31:09 keyring_file -- common/autotest_common.sh@974 -- # wait 341813 00:39:00.810 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:39:00.810 00:39:00.810 real 0m19.491s 00:39:00.810 user 0m42.804s 00:39:00.810 sys 0m3.787s 00:39:00.810 06:31:11 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:00.810 06:31:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.810 ************************************ 00:39:00.810 END TEST keyring_file 00:39:00.810 ************************************ 00:39:00.810 06:31:12 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:39:00.810 06:31:12 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:00.810 06:31:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:00.810 06:31:12 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:39:00.810 06:31:12 -- common/autotest_common.sh@10 -- # set +x 00:39:00.810 ************************************ 00:39:00.810 START TEST keyring_linux 00:39:00.810 ************************************ 00:39:00.810 06:31:12 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:00.810 * Looking for test storage... 00:39:00.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:00.810 06:31:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.810 06:31:12 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.810 06:31:12 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.810 06:31:12 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.810 06:31:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.810 06:31:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.810 06:31:12 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.810 06:31:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:00.810 06:31:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:00.810 06:31:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:00.810 06:31:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 
00:39:00.810 06:31:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:00.810 06:31:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:00.810 06:31:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:00.810 06:31:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:00.810 06:31:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:00.810 06:31:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:01.068 06:31:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:01.068 06:31:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:01.068 /tmp/:spdk-test:key0 00:39:01.068 06:31:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:01.068 06:31:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:01.068 06:31:12 keyring_linux -- 
keyring/common.sh@17 -- # name=key1 00:39:01.068 06:31:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:01.068 06:31:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:01.068 06:31:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:01.068 06:31:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:01.068 06:31:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:01.068 06:31:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:01.068 06:31:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:01.068 06:31:12 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:01.068 06:31:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:01.068 06:31:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:01.069 06:31:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:01.069 06:31:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:01.069 /tmp/:spdk-test:key1 00:39:01.069 06:31:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=344310 00:39:01.069 06:31:12 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:01.069 06:31:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 344310 00:39:01.069 06:31:12 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 344310 ']' 00:39:01.069 06:31:12 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.069 06:31:12 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:01.069 06:31:12 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:01.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:01.069 06:31:12 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:01.069 06:31:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:01.069 [2024-07-26 06:31:12.295139] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:01.069 [2024-07-26 06:31:12.295290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344310 ] 00:39:01.069 EAL: No free 2048 kB hugepages reported on node 1 00:39:01.328 [2024-07-26 06:31:12.417662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.328 [2024-07-26 06:31:12.636104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:02.267 06:31:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:02.267 [2024-07-26 06:31:13.424166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.267 null0 00:39:02.267 [2024-07-26 06:31:13.456201] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:02.267 [2024-07-26 06:31:13.456818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.267 06:31:13 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 
NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:02.267 576643211 00:39:02.267 06:31:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:02.267 824348792 00:39:02.267 06:31:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=344446 00:39:02.267 06:31:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 344446 /var/tmp/bperf.sock 00:39:02.267 06:31:13 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 344446 ']' 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:02.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:02.267 06:31:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:02.267 [2024-07-26 06:31:13.562180] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:02.267 [2024-07-26 06:31:13.562324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344446 ] 00:39:02.525 EAL: No free 2048 kB hugepages reported on node 1 00:39:02.525 [2024-07-26 06:31:13.692135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.783 [2024-07-26 06:31:13.949607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.347 06:31:14 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.347 06:31:14 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:03.347 06:31:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:03.347 06:31:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:03.603 06:31:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:03.603 06:31:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:04.168 06:31:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:04.168 06:31:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:04.426 [2024-07-26 06:31:15.555175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:04.426 
nvme0n1 00:39:04.426 06:31:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:04.426 06:31:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:04.426 06:31:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:04.426 06:31:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:04.426 06:31:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.426 06:31:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:04.686 06:31:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:04.686 06:31:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:04.686 06:31:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:04.686 06:31:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:04.686 06:31:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.686 06:31:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.686 06:31:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:04.946 06:31:16 keyring_linux -- keyring/linux.sh@25 -- # sn=576643211 00:39:04.946 06:31:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:04.946 06:31:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:04.946 06:31:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 576643211 == \5\7\6\6\4\3\2\1\1 ]] 00:39:04.946 06:31:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 576643211 00:39:04.946 06:31:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:04.946 06:31:16 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:04.946 Running I/O for 1 seconds... 00:39:06.321 00:39:06.321 Latency(us) 00:39:06.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:06.321 nvme0n1 : 1.02 4840.32 18.91 0.00 0.00 26219.82 11845.03 42137.22 00:39:06.321 =================================================================================================================== 00:39:06.321 Total : 4840.32 18.91 0.00 0.00 26219.82 11845.03 42137.22 00:39:06.321 0 00:39:06.321 06:31:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:06.321 06:31:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:06.321 06:31:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:06.321 06:31:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:06.321 06:31:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:06.321 06:31:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:06.321 06:31:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:06.321 06:31:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.579 06:31:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:06.579 06:31:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:06.579 06:31:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:06.579 06:31:17 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.579 06:31:17 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:39:06.579 06:31:17 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.579 06:31:17 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:06.579 06:31:17 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.579 06:31:17 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:06.579 06:31:17 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.579 06:31:17 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.579 06:31:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:06.838 [2024-07-26 06:31:18.042223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:06.838 [2024-07-26 06:31:18.042248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:39:06.838 [2024-07-26 06:31:18.043222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:39:06.838 [2024-07-26 06:31:18.044219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:06.838 [2024-07-26 06:31:18.044250] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:06.838 [2024-07-26 06:31:18.044271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:06.838 request: 00:39:06.838 { 00:39:06.838 "name": "nvme0", 00:39:06.838 "trtype": "tcp", 00:39:06.838 "traddr": "127.0.0.1", 00:39:06.838 "adrfam": "ipv4", 00:39:06.838 "trsvcid": "4420", 00:39:06.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.838 "prchk_reftag": false, 00:39:06.838 "prchk_guard": false, 00:39:06.838 "hdgst": false, 00:39:06.838 "ddgst": false, 00:39:06.838 "psk": ":spdk-test:key1", 00:39:06.838 "method": "bdev_nvme_attach_controller", 00:39:06.838 "req_id": 1 00:39:06.838 } 00:39:06.838 Got JSON-RPC error response 00:39:06.838 response: 00:39:06.838 { 00:39:06.838 "code": -5, 00:39:06.838 "message": "Input/output error" 00:39:06.838 } 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@16 -- 
# keyctl search @s user :spdk-test:key0 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@33 -- # sn=576643211 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 576643211 00:39:06.838 1 links removed 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@33 -- # sn=824348792 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 824348792 00:39:06.838 1 links removed 00:39:06.838 06:31:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 344446 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 344446 ']' 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 344446 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 344446 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 344446' 00:39:06.838 killing process with pid 344446 00:39:06.838 06:31:18 keyring_linux -- common/autotest_common.sh@969 -- # kill 344446 00:39:06.838 Received shutdown signal, test time was about 1.000000 seconds 00:39:06.838 00:39:06.838 Latency(us) 00:39:06.838 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.838 =================================================================================================================== 00:39:06.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:06.839 06:31:18 keyring_linux -- common/autotest_common.sh@974 -- # wait 344446 00:39:07.776 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:39:08.034 06:31:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 344310 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 344310 ']' 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 344310 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 344310 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 344310' 00:39:08.034 killing process with pid 344310 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@969 -- # kill 344310 00:39:08.034 06:31:19 keyring_linux -- common/autotest_common.sh@974 -- # wait 344310 00:39:10.565 libgcov profiling error:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/bdev/nvme/bdev_nvme.gcda:Merge mismatch for function 280 00:39:10.565 00:39:10.565 real 0m9.609s 00:39:10.565 user 0m15.904s 00:39:10.565 sys 0m2.004s 00:39:10.565 06:31:21 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:10.566 06:31:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:10.566 
************************************ 00:39:10.566 END TEST keyring_linux 00:39:10.566 ************************************ 00:39:10.566 06:31:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:39:10.566 06:31:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:10.566 06:31:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:10.566 06:31:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:39:10.566 06:31:21 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:39:10.566 06:31:21 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:39:10.566 06:31:21 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:39:10.566 06:31:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:10.566 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:39:10.566 06:31:21 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:39:10.566 06:31:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:10.566 06:31:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:10.566 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:39:12.466 INFO: APP EXITING 00:39:12.466 INFO: killing all VMs 00:39:12.466 INFO: killing vhost app 00:39:12.466 INFO: EXIT DONE 00:39:13.400 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:39:13.400 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:39:13.400 
0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:39:13.400 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:39:13.400 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:39:13.400 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:39:13.400 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:39:13.400 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:39:13.400 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:39:13.400 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:39:13.400 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:39:13.400 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:39:13.400 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:39:13.400 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:39:13.400 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:39:13.400 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:39:13.400 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:39:14.775 Cleaning 00:39:14.775 Removing: /var/run/dpdk/spdk0/config 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:14.775 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:14.775 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:14.775 Removing: /var/run/dpdk/spdk1/config 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:14.775 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:14.775 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:14.775 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:14.775 Removing: /var/run/dpdk/spdk1/mp_socket 00:39:14.775 Removing: /var/run/dpdk/spdk2/config 00:39:14.775 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:14.775 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:14.775 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:14.775 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:14.776 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:14.776 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:14.776 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:14.776 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:14.776 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:14.776 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:14.776 Removing: /var/run/dpdk/spdk3/config 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:14.776 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:14.776 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:14.776 
Removing: /var/run/dpdk/spdk4/config 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:14.776 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:14.776 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:14.776 Removing: /dev/shm/bdev_svc_trace.1 00:39:14.776 Removing: /dev/shm/nvmf_trace.0 00:39:14.776 Removing: /dev/shm/spdk_tgt_trace.pid3155 00:39:15.035 Removing: /var/run/dpdk/spdk0 00:39:15.035 Removing: /var/run/dpdk/spdk1 00:39:15.035 Removing: /var/run/dpdk/spdk2 00:39:15.035 Removing: /var/run/dpdk/spdk3 00:39:15.035 Removing: /var/run/dpdk/spdk4 00:39:15.035 Removing: /var/run/dpdk/spdk_pid10317 00:39:15.035 Removing: /var/run/dpdk/spdk_pid10913 00:39:15.035 Removing: /var/run/dpdk/spdk_pid11493 00:39:15.035 Removing: /var/run/dpdk/spdk_pid12092 00:39:15.035 Removing: /var/run/dpdk/spdk_pid12565 00:39:15.035 Removing: /var/run/dpdk/spdk_pid1281 00:39:15.035 Removing: /var/run/dpdk/spdk_pid12847 00:39:15.035 Removing: /var/run/dpdk/spdk_pid13129 00:39:15.035 Removing: /var/run/dpdk/spdk_pid13327 00:39:15.035 Removing: /var/run/dpdk/spdk_pid134595 00:39:15.035 Removing: /var/run/dpdk/spdk_pid138016 00:39:15.035 Removing: /var/run/dpdk/spdk_pid13892 00:39:15.035 Removing: /var/run/dpdk/spdk_pid142096 00:39:15.035 Removing: /var/run/dpdk/spdk_pid147886 00:39:15.035 Removing: /var/run/dpdk/spdk_pid16517 00:39:15.035 Removing: /var/run/dpdk/spdk_pid17077 00:39:15.035 Removing: /var/run/dpdk/spdk_pid173925 00:39:15.035 Removing: /var/run/dpdk/spdk_pid17637 00:39:15.035 
Removing: /var/run/dpdk/spdk_pid177092 00:39:15.035 Removing: /var/run/dpdk/spdk_pid17781 00:39:15.035 Removing: /var/run/dpdk/spdk_pid178273 00:39:15.035 Removing: /var/run/dpdk/spdk_pid179722 00:39:15.035 Removing: /var/run/dpdk/spdk_pid179993 00:39:15.035 Removing: /var/run/dpdk/spdk_pid180263 00:39:15.035 Removing: /var/run/dpdk/spdk_pid180541 00:39:15.035 Removing: /var/run/dpdk/spdk_pid181269 00:39:15.035 Removing: /var/run/dpdk/spdk_pid182874 00:39:15.035 Removing: /var/run/dpdk/spdk_pid184701 00:39:15.035 Removing: /var/run/dpdk/spdk_pid185396 00:39:15.035 Removing: /var/run/dpdk/spdk_pid187274 00:39:15.035 Removing: /var/run/dpdk/spdk_pid188076 00:39:15.035 Removing: /var/run/dpdk/spdk_pid188817 00:39:15.035 Removing: /var/run/dpdk/spdk_pid19129 00:39:15.035 Removing: /var/run/dpdk/spdk_pid191586 00:39:15.035 Removing: /var/run/dpdk/spdk_pid19274 00:39:15.035 Removing: /var/run/dpdk/spdk_pid195236 00:39:15.035 Removing: /var/run/dpdk/spdk_pid198769 00:39:15.035 Removing: /var/run/dpdk/spdk_pid20569 00:39:15.035 Removing: /var/run/dpdk/spdk_pid20773 00:39:15.035 Removing: /var/run/dpdk/spdk_pid21213 00:39:15.035 Removing: /var/run/dpdk/spdk_pid21353 00:39:15.035 Removing: /var/run/dpdk/spdk_pid21776 00:39:15.035 Removing: /var/run/dpdk/spdk_pid21927 00:39:15.035 Removing: /var/run/dpdk/spdk_pid223309 00:39:15.035 Removing: /var/run/dpdk/spdk_pid226215 00:39:15.035 Removing: /var/run/dpdk/spdk_pid22959 00:39:15.035 Removing: /var/run/dpdk/spdk_pid230355 00:39:15.035 Removing: /var/run/dpdk/spdk_pid231836 00:39:15.035 Removing: /var/run/dpdk/spdk_pid23239 00:39:15.035 Removing: /var/run/dpdk/spdk_pid233585 00:39:15.035 Removing: /var/run/dpdk/spdk_pid23554 00:39:15.035 Removing: /var/run/dpdk/spdk_pid236556 00:39:15.035 Removing: /var/run/dpdk/spdk_pid239301 00:39:15.035 Removing: /var/run/dpdk/spdk_pid244524 00:39:15.035 Removing: /var/run/dpdk/spdk_pid244527 00:39:15.035 Removing: /var/run/dpdk/spdk_pid247561 00:39:15.035 Removing: 
/var/run/dpdk/spdk_pid247702 00:39:15.035 Removing: /var/run/dpdk/spdk_pid247843 00:39:15.035 Removing: /var/run/dpdk/spdk_pid248227 00:39:15.035 Removing: /var/run/dpdk/spdk_pid248235 00:39:15.035 Removing: /var/run/dpdk/spdk_pid249428 00:39:15.035 Removing: /var/run/dpdk/spdk_pid250612 00:39:15.035 Removing: /var/run/dpdk/spdk_pid251877 00:39:15.035 Removing: /var/run/dpdk/spdk_pid253090 00:39:15.035 Removing: /var/run/dpdk/spdk_pid254282 00:39:15.035 Removing: /var/run/dpdk/spdk_pid255465 00:39:15.035 Removing: /var/run/dpdk/spdk_pid259396 00:39:15.035 Removing: /var/run/dpdk/spdk_pid259844 00:39:15.035 Removing: /var/run/dpdk/spdk_pid26035 00:39:15.035 Removing: /var/run/dpdk/spdk_pid261239 00:39:15.035 Removing: /var/run/dpdk/spdk_pid262042 00:39:15.035 Removing: /var/run/dpdk/spdk_pid266062 00:39:15.035 Removing: /var/run/dpdk/spdk_pid268171 00:39:15.035 Removing: /var/run/dpdk/spdk_pid272474 00:39:15.035 Removing: /var/run/dpdk/spdk_pid276326 00:39:15.035 Removing: /var/run/dpdk/spdk_pid283046 00:39:15.035 Removing: /var/run/dpdk/spdk_pid287657 00:39:15.035 Removing: /var/run/dpdk/spdk_pid287661 00:39:15.035 Removing: /var/run/dpdk/spdk_pid28803 00:39:15.035 Removing: /var/run/dpdk/spdk_pid300136 00:39:15.035 Removing: /var/run/dpdk/spdk_pid300797 00:39:15.035 Removing: /var/run/dpdk/spdk_pid301402 00:39:15.035 Removing: /var/run/dpdk/spdk_pid302021 00:39:15.035 Removing: /var/run/dpdk/spdk_pid303598 00:39:15.035 Removing: /var/run/dpdk/spdk_pid304271 00:39:15.035 Removing: /var/run/dpdk/spdk_pid304937 00:39:15.035 Removing: /var/run/dpdk/spdk_pid305503 00:39:15.035 Removing: /var/run/dpdk/spdk_pid308378 00:39:15.035 Removing: /var/run/dpdk/spdk_pid308655 00:39:15.035 Removing: /var/run/dpdk/spdk_pid312725 00:39:15.035 Removing: /var/run/dpdk/spdk_pid313008 00:39:15.035 Removing: /var/run/dpdk/spdk_pid314743 00:39:15.035 Removing: /var/run/dpdk/spdk_pid3155 00:39:15.035 Removing: /var/run/dpdk/spdk_pid320174 00:39:15.035 Removing: 
/var/run/dpdk/spdk_pid320180 00:39:15.035 Removing: /var/run/dpdk/spdk_pid323319 00:39:15.035 Removing: /var/run/dpdk/spdk_pid324841 00:39:15.035 Removing: /var/run/dpdk/spdk_pid326357 00:39:15.035 Removing: /var/run/dpdk/spdk_pid327232 00:39:15.035 Removing: /var/run/dpdk/spdk_pid328754 00:39:15.035 Removing: /var/run/dpdk/spdk_pid329754 00:39:15.035 Removing: /var/run/dpdk/spdk_pid336018 00:39:15.294 Removing: /var/run/dpdk/spdk_pid336415 00:39:15.294 Removing: /var/run/dpdk/spdk_pid336807 00:39:15.294 Removing: /var/run/dpdk/spdk_pid338699 00:39:15.294 Removing: /var/run/dpdk/spdk_pid339014 00:39:15.294 Removing: /var/run/dpdk/spdk_pid339377 00:39:15.294 Removing: /var/run/dpdk/spdk_pid341813 00:39:15.294 Removing: /var/run/dpdk/spdk_pid341956 00:39:15.294 Removing: /var/run/dpdk/spdk_pid343550 00:39:15.294 Removing: /var/run/dpdk/spdk_pid344310 00:39:15.294 Removing: /var/run/dpdk/spdk_pid344446 00:39:15.294 Removing: /var/run/dpdk/spdk_pid36525 00:39:15.294 Removing: /var/run/dpdk/spdk_pid36933 00:39:15.294 Removing: /var/run/dpdk/spdk_pid3912 00:39:15.294 Removing: /var/run/dpdk/spdk_pid39589 00:39:15.294 Removing: /var/run/dpdk/spdk_pid39868 00:39:15.294 Removing: /var/run/dpdk/spdk_pid4194051 00:39:15.294 Removing: /var/run/dpdk/spdk_pid42769 00:39:15.294 Removing: /var/run/dpdk/spdk_pid46737 00:39:15.294 Removing: /var/run/dpdk/spdk_pid4871 00:39:15.294 Removing: /var/run/dpdk/spdk_pid48939 00:39:15.294 Removing: /var/run/dpdk/spdk_pid5291 00:39:15.294 Removing: /var/run/dpdk/spdk_pid56155 00:39:15.294 Removing: /var/run/dpdk/spdk_pid61789 00:39:15.294 Removing: /var/run/dpdk/spdk_pid63243 00:39:15.294 Removing: /var/run/dpdk/spdk_pid6385 00:39:15.294 Removing: /var/run/dpdk/spdk_pid64045 00:39:15.294 Removing: /var/run/dpdk/spdk_pid6646 00:39:15.294 Removing: /var/run/dpdk/spdk_pid75677 00:39:15.294 Removing: /var/run/dpdk/spdk_pid7595 00:39:15.294 Removing: /var/run/dpdk/spdk_pid78220 00:39:15.294 Removing: /var/run/dpdk/spdk_pid9129 00:39:15.294 Clean 
00:39:15.294 06:31:26 -- common/autotest_common.sh@1451 -- # return 0 00:39:15.294 06:31:26 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:39:15.294 06:31:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:15.294 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:39:15.294 06:31:26 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:39:15.294 06:31:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:15.294 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:39:15.294 06:31:26 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:15.294 06:31:26 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:15.294 06:31:26 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:15.294 06:31:26 -- spdk/autotest.sh@395 -- # hash lcov 00:39:15.294 06:31:26 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:15.294 06:31:26 -- spdk/autotest.sh@397 -- # hostname 00:39:15.294 06:31:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:15.580 geninfo: WARNING: invalid characters removed from testname! 
00:39:42.102 06:31:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:46.281 06:31:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:48.806 06:31:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:51.325 06:32:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:53.857 06:32:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:57.134 06:32:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:59.695 06:32:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:59.695 06:32:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:59.695 06:32:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:39:59.695 06:32:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:59.695 06:32:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:59.695 06:32:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:59.695 06:32:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:59.695 06:32:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:59.695 06:32:10 -- paths/export.sh@5 -- $ export PATH
00:39:59.695 06:32:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:59.695 06:32:10 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:39:59.695 06:32:10 -- common/autobuild_common.sh@447 -- $ date +%s
00:39:59.695 06:32:10 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721968330.XXXXXX
00:39:59.695 06:32:10 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721968330.Wlkhjb
00:39:59.695 06:32:10 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:39:59.695 06:32:10 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:39:59.695 06:32:10 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:39:59.695 06:32:10 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:59.695 06:32:10 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:59.695 06:32:10 -- common/autobuild_common.sh@463 -- $ get_config_params
00:39:59.695 06:32:10 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:39:59.695 06:32:10 -- common/autotest_common.sh@10 -- $ set +x
00:39:59.695 06:32:10 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:39:59.695 06:32:10 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:39:59.695 06:32:10 -- pm/common@17 -- $ local monitor
00:39:59.695 06:32:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:59.695 06:32:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:59.695 06:32:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:59.695 06:32:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:59.695 06:32:10 -- pm/common@21 -- $ date +%s
00:39:59.695 06:32:10 -- pm/common@21 -- $ date +%s
00:39:59.695 06:32:10 -- pm/common@25 -- $ sleep 1
00:39:59.695 06:32:10 -- pm/common@21 -- $ date +%s
00:39:59.695 06:32:10 -- pm/common@21 -- $ date +%s
00:39:59.695 06:32:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721968330
00:39:59.695 06:32:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721968330
00:39:59.695 06:32:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721968330
00:39:59.695 06:32:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721968330
00:39:59.695 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721968330_collect-vmstat.pm.log
00:39:59.696 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721968330_collect-cpu-load.pm.log
00:39:59.696 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721968330_collect-cpu-temp.pm.log
00:39:59.696 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721968330_collect-bmc-pm.bmc.pm.log
00:40:00.636 06:32:11 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:40:00.636 06:32:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:40:00.637 06:32:11 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:00.637 06:32:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:40:00.637 06:32:11 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:40:00.637 06:32:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:40:00.637 06:32:11 -- spdk/autopackage.sh@19 -- $ timing_finish
00:40:00.637 06:32:11 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:00.637 06:32:11 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:40:00.637 06:32:11 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:00.637 06:32:11 -- spdk/autopackage.sh@20 -- $ exit 0
00:40:00.637 06:32:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:40:00.637 06:32:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:40:00.637 06:32:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:40:00.637 06:32:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:00.637 06:32:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:40:00.637 06:32:11 -- pm/common@44 -- $ pid=356705
00:40:00.637 06:32:11 -- pm/common@50 -- $ kill -TERM 356705
00:40:00.637 06:32:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:00.637 06:32:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:40:00.637 06:32:11 -- pm/common@44 -- $ pid=356707
00:40:00.637 06:32:11 -- pm/common@50 -- $ kill -TERM 356707
00:40:00.637 06:32:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:00.637 06:32:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:40:00.637 06:32:11 -- pm/common@44 -- $ pid=356709
00:40:00.637 06:32:11 -- pm/common@50 -- $ kill -TERM 356709
00:40:00.637 06:32:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:00.637 06:32:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:40:00.637 06:32:11 -- pm/common@44 -- $ pid=356737
00:40:00.637 06:32:11 -- pm/common@50 -- $ sudo -E kill -TERM 356737
00:40:00.637 + [[ -n 4107761 ]]
00:40:00.637 + sudo kill 4107761
00:40:00.647 [Pipeline] }
00:40:00.665 [Pipeline] // stage
00:40:00.671 [Pipeline] }
00:40:00.689 [Pipeline] // timeout
00:40:00.695 [Pipeline] }
00:40:00.712 [Pipeline] // catchError
00:40:00.718 [Pipeline] }
00:40:00.736 [Pipeline] // wrap
00:40:00.745 [Pipeline] }
00:40:00.761 [Pipeline] // catchError
00:40:00.771 [Pipeline] stage
00:40:00.773 [Pipeline] { (Epilogue)
00:40:00.788 [Pipeline] catchError
00:40:00.790 [Pipeline] {
00:40:00.805 [Pipeline] echo
00:40:00.807 Cleanup processes
00:40:00.813 [Pipeline] sh
00:40:01.099 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:01.099 356839 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:40:01.099 356971 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:01.114 [Pipeline] sh
00:40:01.400 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:01.400 ++ grep -v 'sudo pgrep'
00:40:01.400 ++ awk '{print $1}'
00:40:01.400 + sudo kill -9 356839
00:40:01.413 [Pipeline] sh
00:40:01.701 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:11.677 [Pipeline] sh
00:40:11.963 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:11.963 Artifacts sizes are good
00:40:11.980 [Pipeline] archiveArtifacts
00:40:11.988 Archiving artifacts
00:40:12.210 [Pipeline] sh
00:40:12.500 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:12.513 [Pipeline] cleanWs
00:40:12.521 [WS-CLEANUP] Deleting project workspace...
00:40:12.521 [WS-CLEANUP] Deferred wipeout is used...
00:40:12.528 [WS-CLEANUP] done
00:40:12.529 [Pipeline] }
00:40:12.546 [Pipeline] // catchError
00:40:12.558 [Pipeline] sh
00:40:12.844 + logger -p user.info -t JENKINS-CI
00:40:12.852 [Pipeline] }
00:40:12.869 [Pipeline] // stage
00:40:12.876 [Pipeline] }
00:40:12.895 [Pipeline] // node
00:40:12.901 [Pipeline] End of Pipeline
00:40:12.939 Finished: SUCCESS